Test Report: Docker_Linux_crio 21835

73e6d6839bae6cdde957e116826ac4e2fc7d714a:2025-11-01:42153

Failed tests (39/327)

Order  Failed test  Duration (s)
29 TestAddons/serial/Volcano 0.26
35 TestAddons/parallel/Registry 13.26
36 TestAddons/parallel/RegistryCreds 0.44
37 TestAddons/parallel/Ingress 148.58
38 TestAddons/parallel/InspektorGadget 5.26
39 TestAddons/parallel/MetricsServer 5.33
41 TestAddons/parallel/CSI 49.7
42 TestAddons/parallel/Headlamp 2.84
43 TestAddons/parallel/CloudSpanner 5.3
44 TestAddons/parallel/LocalPath 8.14
45 TestAddons/parallel/NvidiaDevicePlugin 6.25
46 TestAddons/parallel/Yakd 5.26
47 TestAddons/parallel/AmdGpuDevicePlugin 6.26
97 TestFunctional/parallel/ServiceCmdConnect 603.12
117 TestFunctional/parallel/ImageCommands/ImageListYaml 2.29
120 TestFunctional/parallel/ServiceCmd/DeployApp 600.62
121 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.08
127 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.94
128 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.41
129 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.32
131 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.2
132 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.35
152 TestFunctional/parallel/ServiceCmd/HTTPS 0.53
153 TestFunctional/parallel/ServiceCmd/Format 0.54
154 TestFunctional/parallel/ServiceCmd/URL 0.54
191 TestJSONOutput/pause/Command 1.94
197 TestJSONOutput/unpause/Command 1.96
248 TestPreload 425.87
271 TestPause/serial/Pause 6.13
298 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 2.38
301 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 2.24
310 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 2.27
319 TestStartStop/group/old-k8s-version/serial/Pause 6.98
321 TestStartStop/group/no-preload/serial/Pause 7.48
328 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 2.28
332 TestStartStop/group/embed-certs/serial/Pause 6.92
336 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 2.64
342 TestStartStop/group/newest-cni/serial/Pause 7.22
365 TestStartStop/group/default-k8s-diff-port/serial/Pause 7.91

TestAddons/serial/Volcano (0.26s)

                                                
                                                
=== RUN   TestAddons/serial/Volcano
addons_test.go:850: skipping: crio not supported
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-491859 addons disable volcano --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-491859 addons disable volcano --alsologtostderr -v=1: exit status 11 (259.011586ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1101 08:31:42.949758   19065 out.go:360] Setting OutFile to fd 1 ...
	I1101 08:31:42.949976   19065 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 08:31:42.949989   19065 out.go:374] Setting ErrFile to fd 2...
	I1101 08:31:42.949996   19065 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 08:31:42.950230   19065 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21835-5913/.minikube/bin
	I1101 08:31:42.950530   19065 mustload.go:66] Loading cluster: addons-491859
	I1101 08:31:42.950912   19065 config.go:182] Loaded profile config "addons-491859": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 08:31:42.950931   19065 addons.go:607] checking whether the cluster is paused
	I1101 08:31:42.951033   19065 config.go:182] Loaded profile config "addons-491859": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 08:31:42.951049   19065 host.go:66] Checking if "addons-491859" exists ...
	I1101 08:31:42.951434   19065 cli_runner.go:164] Run: docker container inspect addons-491859 --format={{.State.Status}}
	I1101 08:31:42.971277   19065 ssh_runner.go:195] Run: systemctl --version
	I1101 08:31:42.971345   19065 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-491859
	I1101 08:31:42.990528   19065 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21835-5913/.minikube/machines/addons-491859/id_rsa Username:docker}
	I1101 08:31:43.090761   19065 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1101 08:31:43.090855   19065 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1101 08:31:43.120804   19065 cri.go:89] found id: "33e4e1fc1e33072f888a46aa17d3beb4e58f11877a9d27925e1e5d968eb6c903"
	I1101 08:31:43.120831   19065 cri.go:89] found id: "f04af0fd3a62dfc9f83c9abac1b7ccb6528a248e3fd3ee02ea1f2a7350778e83"
	I1101 08:31:43.120836   19065 cri.go:89] found id: "a1f3a49b7f394fcf06e3ec79eef028b31fb8f10e2d0269aa4eec27450086f2e9"
	I1101 08:31:43.120839   19065 cri.go:89] found id: "ff4bdf52bbb882d70c007d186f38568cb5286b9a2116e10107044414d1c422b0"
	I1101 08:31:43.120842   19065 cri.go:89] found id: "9d1050c081be96d28152bfd4e229378b4cc1d8c31d74f567fbc905b5e676cbe5"
	I1101 08:31:43.120845   19065 cri.go:89] found id: "2c81dda5dfe97017e1ea451a903bb723503013671dfd4ad2848dbd7ed4c00fda"
	I1101 08:31:43.120848   19065 cri.go:89] found id: "0f17b27c9fb94821e21590f954f59af583f7f28766b74bcf54fd77fd4403631f"
	I1101 08:31:43.120851   19065 cri.go:89] found id: "3070142e889654833dbabc836972d24ca0160e211e6a01dc410037b3d06aa377"
	I1101 08:31:43.120853   19065 cri.go:89] found id: "e0fe6aa919f9f7ec3e5dd5de78f0ba1c29746db4b58ff19fe034196dcb04a040"
	I1101 08:31:43.120858   19065 cri.go:89] found id: "dd32f839b496afac7e54669ede10e44b695513bd1f08cb2572d080421d76ed1f"
	I1101 08:31:43.120860   19065 cri.go:89] found id: "b8dc66998b8c65737a3fc68f94611d5a75e4841817858e50cf8f41fe3d0b9111"
	I1101 08:31:43.120887   19065 cri.go:89] found id: "c4c4e8392feed85ce6d8b52f77463bc2a8238dd093e730bd11ad824f180a3227"
	I1101 08:31:43.120892   19065 cri.go:89] found id: "a4c41d6f050f2ca6af53a5d7a6a54f2b04fb24731eca6d7272b14503b747f50d"
	I1101 08:31:43.120895   19065 cri.go:89] found id: "2b4413f8423a31353523e4d44f7675fac21836f4e3b491f3d3f19955b8251025"
	I1101 08:31:43.120899   19065 cri.go:89] found id: "73d495a359ef08303218d0bd2af8743a68b70af8ffdfadd49ac606f145b559b6"
	I1101 08:31:43.120907   19065 cri.go:89] found id: "18fc9837ab4ea8c07f85c79610c9eda88508e53a37801274e8022d17c69f1a98"
	I1101 08:31:43.120911   19065 cri.go:89] found id: "87757a0f68b4c84bb6b6a0830633d098f646e6d9b6aa521bccfaeb77b540635f"
	I1101 08:31:43.120917   19065 cri.go:89] found id: "f17c2b6b25fbc93674f08548b5309b9715e2cb6d7600ac5d6557505072b3fb3f"
	I1101 08:31:43.120922   19065 cri.go:89] found id: "c60507f296e9571f568638d8fac5f87c704242925cf7b4aa378db809c3e3176b"
	I1101 08:31:43.120925   19065 cri.go:89] found id: "4c1ad1a76dfd8c2d5f52abbb115ac02bc7871e82ffe8eed8b2bce0574ab65ce3"
	I1101 08:31:43.120934   19065 cri.go:89] found id: "808e84f4795d8f4c21c0316fc7e7437f547a0d653508d0419288257452ccf97b"
	I1101 08:31:43.120941   19065 cri.go:89] found id: "d4c72eaef44361ee96a4bd979c97bf60778beaa33b136b1407ada4c192a11240"
	I1101 08:31:43.120945   19065 cri.go:89] found id: "cdda903ada7547082c69c9584210b581ad9bfe62602052de013ddc3c59d043bc"
	I1101 08:31:43.120949   19065 cri.go:89] found id: "b29235edc538375f7d6a03229d64488db46c49997437218731a8cd64edc28444"
	I1101 08:31:43.120952   19065 cri.go:89] found id: ""
	I1101 08:31:43.120992   19065 ssh_runner.go:195] Run: sudo runc list -f json
	I1101 08:31:43.135391   19065 out.go:203] 
	W1101 08:31:43.136984   19065 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T08:31:43Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T08:31:43Z" level=error msg="open /run/runc: no such file or directory"
	
	W1101 08:31:43.137006   19065 out.go:285] * 
	* 
	W1101 08:31:43.140199   19065 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9bd16c244da2144137a37071fb77e06a574610a0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9bd16c244da2144137a37071fb77e06a574610a0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1101 08:31:43.141993   19065 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable volcano addon: args "out/minikube-linux-amd64 -p addons-491859 addons disable volcano --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/serial/Volcano (0.26s)
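The same MK_ADDON_DISABLE_PAUSED failure recurs in most of the addon tests below: per the cri.go and ssh_runner lines above, the `addons disable` path first lists kube-system containers through crictl (which succeeds), then shells out to `sudo runc list -f json`, which fails because `/run/runc` does not exist on this CRI-O node. The following is a minimal, hypothetical Go sketch of that two-step check, reconstructed from the log rather than from minikube's actual internals; running the same pair of commands directly on the node should reproduce the failure.

```go
// Hypothetical reconstruction of the "is the cluster paused?" check seen in
// the log above. Both commands are the ones minikube runs over SSH.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Step 1: list kube-system containers via the CRI — succeeds on CRI-O.
	crictl := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
		"--label", "io.kubernetes.pod.namespace=kube-system")
	if out, err := crictl.CombinedOutput(); err != nil {
		fmt.Printf("crictl failed: %v\n%s", err, out)
		return
	}

	// Step 2: ask runc directly — in this run it fails with
	// "open /run/runc: no such file or directory", since that state
	// directory is absent on the node.
	runc := exec.Command("sudo", "runc", "list", "-f", "json")
	if out, err := runc.CombinedOutput(); err != nil {
		fmt.Printf("runc list failed: %v\n%s", err, out)
	}
}
```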

                                                
                                    
TestAddons/parallel/Registry (13.26s)

                                                
                                                
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:382: registry stabilized in 3.365294ms
addons_test.go:384: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-6b586f9694-nlmgw" [81e0129d-d199-423b-a493-623cb2695a4f] Running
addons_test.go:384: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.002909315s
addons_test.go:387: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-proxy-jncr6" [6640fd69-5d62-4d2e-acb5-66ff58f82684] Running
addons_test.go:387: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.002613962s
addons_test.go:392: (dbg) Run:  kubectl --context addons-491859 delete po -l run=registry-test --now
addons_test.go:397: (dbg) Run:  kubectl --context addons-491859 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:397: (dbg) Done: kubectl --context addons-491859 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (2.783258496s)
addons_test.go:411: (dbg) Run:  out/minikube-linux-amd64 -p addons-491859 ip
2025/11/01 08:32:03 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-491859 addons disable registry --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-491859 addons disable registry --alsologtostderr -v=1: exit status 11 (265.074247ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1101 08:32:04.009558   21148 out.go:360] Setting OutFile to fd 1 ...
	I1101 08:32:04.009877   21148 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 08:32:04.009886   21148 out.go:374] Setting ErrFile to fd 2...
	I1101 08:32:04.009891   21148 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 08:32:04.010122   21148 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21835-5913/.minikube/bin
	I1101 08:32:04.010408   21148 mustload.go:66] Loading cluster: addons-491859
	I1101 08:32:04.010736   21148 config.go:182] Loaded profile config "addons-491859": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 08:32:04.010751   21148 addons.go:607] checking whether the cluster is paused
	I1101 08:32:04.010832   21148 config.go:182] Loaded profile config "addons-491859": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 08:32:04.010844   21148 host.go:66] Checking if "addons-491859" exists ...
	I1101 08:32:04.011293   21148 cli_runner.go:164] Run: docker container inspect addons-491859 --format={{.State.Status}}
	I1101 08:32:04.030688   21148 ssh_runner.go:195] Run: systemctl --version
	I1101 08:32:04.030740   21148 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-491859
	I1101 08:32:04.049528   21148 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21835-5913/.minikube/machines/addons-491859/id_rsa Username:docker}
	I1101 08:32:04.149903   21148 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1101 08:32:04.150022   21148 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1101 08:32:04.186332   21148 cri.go:89] found id: "33e4e1fc1e33072f888a46aa17d3beb4e58f11877a9d27925e1e5d968eb6c903"
	I1101 08:32:04.186374   21148 cri.go:89] found id: "f04af0fd3a62dfc9f83c9abac1b7ccb6528a248e3fd3ee02ea1f2a7350778e83"
	I1101 08:32:04.186380   21148 cri.go:89] found id: "a1f3a49b7f394fcf06e3ec79eef028b31fb8f10e2d0269aa4eec27450086f2e9"
	I1101 08:32:04.186384   21148 cri.go:89] found id: "ff4bdf52bbb882d70c007d186f38568cb5286b9a2116e10107044414d1c422b0"
	I1101 08:32:04.186388   21148 cri.go:89] found id: "9d1050c081be96d28152bfd4e229378b4cc1d8c31d74f567fbc905b5e676cbe5"
	I1101 08:32:04.186393   21148 cri.go:89] found id: "2c81dda5dfe97017e1ea451a903bb723503013671dfd4ad2848dbd7ed4c00fda"
	I1101 08:32:04.186398   21148 cri.go:89] found id: "0f17b27c9fb94821e21590f954f59af583f7f28766b74bcf54fd77fd4403631f"
	I1101 08:32:04.186401   21148 cri.go:89] found id: "3070142e889654833dbabc836972d24ca0160e211e6a01dc410037b3d06aa377"
	I1101 08:32:04.186405   21148 cri.go:89] found id: "e0fe6aa919f9f7ec3e5dd5de78f0ba1c29746db4b58ff19fe034196dcb04a040"
	I1101 08:32:04.186426   21148 cri.go:89] found id: "dd32f839b496afac7e54669ede10e44b695513bd1f08cb2572d080421d76ed1f"
	I1101 08:32:04.186437   21148 cri.go:89] found id: "b8dc66998b8c65737a3fc68f94611d5a75e4841817858e50cf8f41fe3d0b9111"
	I1101 08:32:04.186440   21148 cri.go:89] found id: "c4c4e8392feed85ce6d8b52f77463bc2a8238dd093e730bd11ad824f180a3227"
	I1101 08:32:04.186445   21148 cri.go:89] found id: "a4c41d6f050f2ca6af53a5d7a6a54f2b04fb24731eca6d7272b14503b747f50d"
	I1101 08:32:04.186448   21148 cri.go:89] found id: "2b4413f8423a31353523e4d44f7675fac21836f4e3b491f3d3f19955b8251025"
	I1101 08:32:04.186452   21148 cri.go:89] found id: "73d495a359ef08303218d0bd2af8743a68b70af8ffdfadd49ac606f145b559b6"
	I1101 08:32:04.186468   21148 cri.go:89] found id: "18fc9837ab4ea8c07f85c79610c9eda88508e53a37801274e8022d17c69f1a98"
	I1101 08:32:04.186476   21148 cri.go:89] found id: "87757a0f68b4c84bb6b6a0830633d098f646e6d9b6aa521bccfaeb77b540635f"
	I1101 08:32:04.186481   21148 cri.go:89] found id: "f17c2b6b25fbc93674f08548b5309b9715e2cb6d7600ac5d6557505072b3fb3f"
	I1101 08:32:04.186485   21148 cri.go:89] found id: "c60507f296e9571f568638d8fac5f87c704242925cf7b4aa378db809c3e3176b"
	I1101 08:32:04.186488   21148 cri.go:89] found id: "4c1ad1a76dfd8c2d5f52abbb115ac02bc7871e82ffe8eed8b2bce0574ab65ce3"
	I1101 08:32:04.186491   21148 cri.go:89] found id: "808e84f4795d8f4c21c0316fc7e7437f547a0d653508d0419288257452ccf97b"
	I1101 08:32:04.186495   21148 cri.go:89] found id: "d4c72eaef44361ee96a4bd979c97bf60778beaa33b136b1407ada4c192a11240"
	I1101 08:32:04.186498   21148 cri.go:89] found id: "cdda903ada7547082c69c9584210b581ad9bfe62602052de013ddc3c59d043bc"
	I1101 08:32:04.186502   21148 cri.go:89] found id: "b29235edc538375f7d6a03229d64488db46c49997437218731a8cd64edc28444"
	I1101 08:32:04.186507   21148 cri.go:89] found id: ""
	I1101 08:32:04.186579   21148 ssh_runner.go:195] Run: sudo runc list -f json
	I1101 08:32:04.203074   21148 out.go:203] 
	W1101 08:32:04.204444   21148 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T08:32:04Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T08:32:04Z" level=error msg="open /run/runc: no such file or directory"
	
	W1101 08:32:04.204479   21148 out.go:285] * 
	* 
	W1101 08:32:04.209928   21148 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_94fa7435cdb0fda2540861b9b71556c8cae5c5f1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_94fa7435cdb0fda2540861b9b71556c8cae5c5f1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1101 08:32:04.211758   21148 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable registry addon: args "out/minikube-linux-amd64 -p addons-491859 addons disable registry --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Registry (13.26s)
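For reference, the registry itself was healthy before the disable step failed: the in-cluster wget against registry.kube-system.svc.cluster.local succeeded, and the test then probed the node-level endpoint (the `GET http://192.168.49.2:5000` debug line). Below is a hedged sketch of that host-side probe; the address is the one from this run (`minikube -p addons-491859 ip` plus what appears to be the registry proxy's port 5000), not a fixed value.

```go
// Minimal host-side reachability probe for the registry addon, mirroring the
// "GET http://192.168.49.2:5000" request in the log above.
package main

import (
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{Timeout: 5 * time.Second}
	resp, err := client.Get("http://192.168.49.2:5000/")
	if err != nil {
		fmt.Println("registry not reachable:", err)
		return
	}
	defer resp.Body.Close()
	// A bare registry root typically answers 200 OK with an empty body.
	fmt.Println("registry answered with status:", resp.Status)
}
```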

                                                
                                    
TestAddons/parallel/RegistryCreds (0.44s)

                                                
                                                
=== RUN   TestAddons/parallel/RegistryCreds
=== PAUSE TestAddons/parallel/RegistryCreds

=== CONT  TestAddons/parallel/RegistryCreds
addons_test.go:323: registry-creds stabilized in 3.452446ms
addons_test.go:325: (dbg) Run:  out/minikube-linux-amd64 addons configure registry-creds -f ./testdata/addons_testconfig.json -p addons-491859
addons_test.go:332: (dbg) Run:  kubectl --context addons-491859 -n kube-system get secret -o yaml
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-491859 addons disable registry-creds --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-491859 addons disable registry-creds --alsologtostderr -v=1: exit status 11 (252.820357ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1101 08:32:08.035213   22224 out.go:360] Setting OutFile to fd 1 ...
	I1101 08:32:08.035354   22224 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 08:32:08.035362   22224 out.go:374] Setting ErrFile to fd 2...
	I1101 08:32:08.035367   22224 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 08:32:08.035558   22224 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21835-5913/.minikube/bin
	I1101 08:32:08.035827   22224 mustload.go:66] Loading cluster: addons-491859
	I1101 08:32:08.036191   22224 config.go:182] Loaded profile config "addons-491859": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 08:32:08.036208   22224 addons.go:607] checking whether the cluster is paused
	I1101 08:32:08.036293   22224 config.go:182] Loaded profile config "addons-491859": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 08:32:08.036305   22224 host.go:66] Checking if "addons-491859" exists ...
	I1101 08:32:08.036640   22224 cli_runner.go:164] Run: docker container inspect addons-491859 --format={{.State.Status}}
	I1101 08:32:08.055507   22224 ssh_runner.go:195] Run: systemctl --version
	I1101 08:32:08.055560   22224 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-491859
	I1101 08:32:08.074327   22224 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21835-5913/.minikube/machines/addons-491859/id_rsa Username:docker}
	I1101 08:32:08.173620   22224 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1101 08:32:08.173686   22224 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1101 08:32:08.203451   22224 cri.go:89] found id: "33e4e1fc1e33072f888a46aa17d3beb4e58f11877a9d27925e1e5d968eb6c903"
	I1101 08:32:08.203477   22224 cri.go:89] found id: "f04af0fd3a62dfc9f83c9abac1b7ccb6528a248e3fd3ee02ea1f2a7350778e83"
	I1101 08:32:08.203481   22224 cri.go:89] found id: "a1f3a49b7f394fcf06e3ec79eef028b31fb8f10e2d0269aa4eec27450086f2e9"
	I1101 08:32:08.203486   22224 cri.go:89] found id: "ff4bdf52bbb882d70c007d186f38568cb5286b9a2116e10107044414d1c422b0"
	I1101 08:32:08.203490   22224 cri.go:89] found id: "9d1050c081be96d28152bfd4e229378b4cc1d8c31d74f567fbc905b5e676cbe5"
	I1101 08:32:08.203495   22224 cri.go:89] found id: "2c81dda5dfe97017e1ea451a903bb723503013671dfd4ad2848dbd7ed4c00fda"
	I1101 08:32:08.203499   22224 cri.go:89] found id: "0f17b27c9fb94821e21590f954f59af583f7f28766b74bcf54fd77fd4403631f"
	I1101 08:32:08.203504   22224 cri.go:89] found id: "3070142e889654833dbabc836972d24ca0160e211e6a01dc410037b3d06aa377"
	I1101 08:32:08.203508   22224 cri.go:89] found id: "e0fe6aa919f9f7ec3e5dd5de78f0ba1c29746db4b58ff19fe034196dcb04a040"
	I1101 08:32:08.203527   22224 cri.go:89] found id: "dd32f839b496afac7e54669ede10e44b695513bd1f08cb2572d080421d76ed1f"
	I1101 08:32:08.203535   22224 cri.go:89] found id: "b8dc66998b8c65737a3fc68f94611d5a75e4841817858e50cf8f41fe3d0b9111"
	I1101 08:32:08.203539   22224 cri.go:89] found id: "c4c4e8392feed85ce6d8b52f77463bc2a8238dd093e730bd11ad824f180a3227"
	I1101 08:32:08.203546   22224 cri.go:89] found id: "a4c41d6f050f2ca6af53a5d7a6a54f2b04fb24731eca6d7272b14503b747f50d"
	I1101 08:32:08.203551   22224 cri.go:89] found id: "2b4413f8423a31353523e4d44f7675fac21836f4e3b491f3d3f19955b8251025"
	I1101 08:32:08.203558   22224 cri.go:89] found id: "73d495a359ef08303218d0bd2af8743a68b70af8ffdfadd49ac606f145b559b6"
	I1101 08:32:08.203563   22224 cri.go:89] found id: "18fc9837ab4ea8c07f85c79610c9eda88508e53a37801274e8022d17c69f1a98"
	I1101 08:32:08.203568   22224 cri.go:89] found id: "87757a0f68b4c84bb6b6a0830633d098f646e6d9b6aa521bccfaeb77b540635f"
	I1101 08:32:08.203573   22224 cri.go:89] found id: "f17c2b6b25fbc93674f08548b5309b9715e2cb6d7600ac5d6557505072b3fb3f"
	I1101 08:32:08.203578   22224 cri.go:89] found id: "c60507f296e9571f568638d8fac5f87c704242925cf7b4aa378db809c3e3176b"
	I1101 08:32:08.203589   22224 cri.go:89] found id: "4c1ad1a76dfd8c2d5f52abbb115ac02bc7871e82ffe8eed8b2bce0574ab65ce3"
	I1101 08:32:08.203596   22224 cri.go:89] found id: "808e84f4795d8f4c21c0316fc7e7437f547a0d653508d0419288257452ccf97b"
	I1101 08:32:08.203609   22224 cri.go:89] found id: "d4c72eaef44361ee96a4bd979c97bf60778beaa33b136b1407ada4c192a11240"
	I1101 08:32:08.203616   22224 cri.go:89] found id: "cdda903ada7547082c69c9584210b581ad9bfe62602052de013ddc3c59d043bc"
	I1101 08:32:08.203620   22224 cri.go:89] found id: "b29235edc538375f7d6a03229d64488db46c49997437218731a8cd64edc28444"
	I1101 08:32:08.203627   22224 cri.go:89] found id: ""
	I1101 08:32:08.203672   22224 ssh_runner.go:195] Run: sudo runc list -f json
	I1101 08:32:08.218515   22224 out.go:203] 
	W1101 08:32:08.219674   22224 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T08:32:08Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T08:32:08Z" level=error msg="open /run/runc: no such file or directory"
	
	W1101 08:32:08.219700   22224 out.go:285] * 
	* 
	W1101 08:32:08.222805   22224 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_ac42ae7bb4bac5cd909a08f6506d602b3d2ccf6c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_ac42ae7bb4bac5cd909a08f6506d602b3d2ccf6c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1101 08:32:08.224141   22224 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable registry-creds addon: args "out/minikube-linux-amd64 -p addons-491859 addons disable registry-creds --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/RegistryCreds (0.44s)

                                                
                                    
TestAddons/parallel/Ingress (148.58s)

                                                
                                                
=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-491859 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-491859 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-491859 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:352: "nginx" [68753463-f0d3-4cff-91c6-d29cfe38d92f] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx" [68753463-f0d3-4cff-91c6-d29cfe38d92f] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 9.002999932s
I1101 08:32:13.681771    9414 kapi.go:150] Service nginx in namespace default found.
addons_test.go:264: (dbg) Run:  out/minikube-linux-amd64 -p addons-491859 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:264: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-491859 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m15.929434447s)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 28

                                                
                                                
** /stderr **
addons_test.go:280: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:288: (dbg) Run:  kubectl --context addons-491859 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-linux-amd64 -p addons-491859 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.49.2
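The step that actually failed above is the in-node curl that sends a `Host: nginx.example.com` header to the ingress controller on port 80 and times out after ~2m15s (status 28 is curl's "operation timed out" code, propagated through ssh). A hedged Go sketch of the equivalent routing check from the host follows; it targets the minikube IP from this run (192.168.49.2) instead of 127.0.0.1 inside the node, and is only an illustration of what the test verifies, not the test's own code.

```go
// Sketch of the ingress routing check: an HTTP request to the ingress
// controller with the Host header set to nginx.example.com, which is what
// lets the nginx ingress route to the test backend.
package main

import (
	"fmt"
	"net/http"
	"time"
)

func main() {
	req, err := http.NewRequest("GET", "http://192.168.49.2/", nil)
	if err != nil {
		panic(err)
	}
	// In Go, a custom Host header is set via req.Host, not req.Header.
	req.Host = "nginx.example.com"

	client := &http.Client{Timeout: 30 * time.Second}
	resp, err := client.Do(req)
	if err != nil {
		fmt.Println("ingress did not answer:", err) // matches the timeout seen in the log
		return
	}
	defer resp.Body.Close()
	fmt.Println("ingress answered:", resp.Status)
}
```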
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestAddons/parallel/Ingress]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestAddons/parallel/Ingress]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect addons-491859
helpers_test.go:243: (dbg) docker inspect addons-491859:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "227e011c4d635e86a7d98338cfbc60ccc8e82d06e889105c06607437284225aa",
	        "Created": "2025-11-01T08:29:42.519506733Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 11399,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-01T08:29:42.556344662Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:a1caeebaf98ed0136731e905a1e086f77985a42c2ebb5a7e0b3d0bd7fcbe10cc",
	        "ResolvConfPath": "/var/lib/docker/containers/227e011c4d635e86a7d98338cfbc60ccc8e82d06e889105c06607437284225aa/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/227e011c4d635e86a7d98338cfbc60ccc8e82d06e889105c06607437284225aa/hostname",
	        "HostsPath": "/var/lib/docker/containers/227e011c4d635e86a7d98338cfbc60ccc8e82d06e889105c06607437284225aa/hosts",
	        "LogPath": "/var/lib/docker/containers/227e011c4d635e86a7d98338cfbc60ccc8e82d06e889105c06607437284225aa/227e011c4d635e86a7d98338cfbc60ccc8e82d06e889105c06607437284225aa-json.log",
	        "Name": "/addons-491859",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-491859:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "addons-491859",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "227e011c4d635e86a7d98338cfbc60ccc8e82d06e889105c06607437284225aa",
	                "LowerDir": "/var/lib/docker/overlay2/10ecdaf89aff152dafb69a1872c98f770f95a3e681dcd3228c2161ebabf3576e-init/diff:/var/lib/docker/overlay2/95b4043403bc6c2b0722fad0bb31817195e1e004282cc78b772e7c8c9f9def2d/diff",
	                "MergedDir": "/var/lib/docker/overlay2/10ecdaf89aff152dafb69a1872c98f770f95a3e681dcd3228c2161ebabf3576e/merged",
	                "UpperDir": "/var/lib/docker/overlay2/10ecdaf89aff152dafb69a1872c98f770f95a3e681dcd3228c2161ebabf3576e/diff",
	                "WorkDir": "/var/lib/docker/overlay2/10ecdaf89aff152dafb69a1872c98f770f95a3e681dcd3228c2161ebabf3576e/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-491859",
	                "Source": "/var/lib/docker/volumes/addons-491859/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-491859",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-491859",
	                "name.minikube.sigs.k8s.io": "addons-491859",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "779f67121a96e6e16b406069fd940327ec18a62b70819f756c12dfbb3b10eed1",
	            "SandboxKey": "/var/run/docker/netns/779f67121a96",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32768"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32769"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32772"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32770"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32771"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-491859": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "aa:09:99:0c:7d:d6",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "6a483d68a9f1df4d48125180490b80e16279db89452ff7e0302439e525714351",
	                    "EndpointID": "46e97eb58d018377da131eed143ec60fb3017b97553f644bd343d2d30f74a16d",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-491859",
	                        "227e011c4d63"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-491859 -n addons-491859
helpers_test.go:252: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p addons-491859 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p addons-491859 logs -n 25: (1.192284241s)
helpers_test.go:260: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                   ARGS                                                                                                                                                                                                                                   │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ --download-only -p binary-mirror-218549 --alsologtostderr --binary-mirror http://127.0.0.1:39227 --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                               │ binary-mirror-218549 │ jenkins │ v1.37.0 │ 01 Nov 25 08:29 UTC │                     │
	│ delete  │ -p binary-mirror-218549                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ binary-mirror-218549 │ jenkins │ v1.37.0 │ 01 Nov 25 08:29 UTC │ 01 Nov 25 08:29 UTC │
	│ addons  │ enable dashboard -p addons-491859                                                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-491859        │ jenkins │ v1.37.0 │ 01 Nov 25 08:29 UTC │                     │
	│ addons  │ disable dashboard -p addons-491859                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-491859        │ jenkins │ v1.37.0 │ 01 Nov 25 08:29 UTC │                     │
	│ start   │ -p addons-491859 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-491859        │ jenkins │ v1.37.0 │ 01 Nov 25 08:29 UTC │ 01 Nov 25 08:31 UTC │
	│ addons  │ addons-491859 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                              │ addons-491859        │ jenkins │ v1.37.0 │ 01 Nov 25 08:31 UTC │                     │
	│ addons  │ addons-491859 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-491859        │ jenkins │ v1.37.0 │ 01 Nov 25 08:31 UTC │                     │
	│ addons  │ enable headlamp -p addons-491859 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-491859        │ jenkins │ v1.37.0 │ 01 Nov 25 08:31 UTC │                     │
	│ addons  │ addons-491859 addons disable headlamp --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-491859        │ jenkins │ v1.37.0 │ 01 Nov 25 08:31 UTC │                     │
	│ addons  │ addons-491859 addons disable nvidia-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-491859        │ jenkins │ v1.37.0 │ 01 Nov 25 08:31 UTC │                     │
	│ ssh     │ addons-491859 ssh cat /opt/local-path-provisioner/pvc-0ee8f669-5c4e-41a4-b21d-0752409d6634_default_test-pvc/file1                                                                                                                                                                                                                                                                                                                                                        │ addons-491859        │ jenkins │ v1.37.0 │ 01 Nov 25 08:31 UTC │ 01 Nov 25 08:31 UTC │
	│ addons  │ addons-491859 addons disable cloud-spanner --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-491859        │ jenkins │ v1.37.0 │ 01 Nov 25 08:31 UTC │                     │
	│ addons  │ addons-491859 addons disable storage-provisioner-rancher --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                          │ addons-491859        │ jenkins │ v1.37.0 │ 01 Nov 25 08:31 UTC │                     │
	│ addons  │ addons-491859 addons disable metrics-server --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-491859        │ jenkins │ v1.37.0 │ 01 Nov 25 08:32 UTC │                     │
	│ ip      │ addons-491859 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-491859        │ jenkins │ v1.37.0 │ 01 Nov 25 08:32 UTC │ 01 Nov 25 08:32 UTC │
	│ addons  │ addons-491859 addons disable registry --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-491859        │ jenkins │ v1.37.0 │ 01 Nov 25 08:32 UTC │                     │
	│ addons  │ addons-491859 addons disable amd-gpu-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                │ addons-491859        │ jenkins │ v1.37.0 │ 01 Nov 25 08:32 UTC │                     │
	│ addons  │ addons-491859 addons disable yakd --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-491859        │ jenkins │ v1.37.0 │ 01 Nov 25 08:32 UTC │                     │
	│ addons  │ configure registry-creds -f ./testdata/addons_testconfig.json -p addons-491859                                                                                                                                                                                                                                                                                                                                                                                           │ addons-491859        │ jenkins │ v1.37.0 │ 01 Nov 25 08:32 UTC │ 01 Nov 25 08:32 UTC │
	│ addons  │ addons-491859 addons disable registry-creds --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-491859        │ jenkins │ v1.37.0 │ 01 Nov 25 08:32 UTC │                     │
	│ addons  │ addons-491859 addons disable inspektor-gadget --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                     │ addons-491859        │ jenkins │ v1.37.0 │ 01 Nov 25 08:32 UTC │                     │
	│ ssh     │ addons-491859 ssh curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-491859        │ jenkins │ v1.37.0 │ 01 Nov 25 08:32 UTC │                     │
	│ addons  │ addons-491859 addons disable volumesnapshots --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                      │ addons-491859        │ jenkins │ v1.37.0 │ 01 Nov 25 08:32 UTC │                     │
	│ addons  │ addons-491859 addons disable csi-hostpath-driver --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-491859        │ jenkins │ v1.37.0 │ 01 Nov 25 08:32 UTC │                     │
	│ ip      │ addons-491859 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-491859        │ jenkins │ v1.37.0 │ 01 Nov 25 08:34 UTC │ 01 Nov 25 08:34 UTC │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/01 08:29:18
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1101 08:29:18.395686   10731 out.go:360] Setting OutFile to fd 1 ...
	I1101 08:29:18.395821   10731 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 08:29:18.395832   10731 out.go:374] Setting ErrFile to fd 2...
	I1101 08:29:18.395836   10731 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 08:29:18.396084   10731 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21835-5913/.minikube/bin
	I1101 08:29:18.396625   10731 out.go:368] Setting JSON to false
	I1101 08:29:18.397562   10731 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":706,"bootTime":1761985052,"procs":174,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1101 08:29:18.397656   10731 start.go:143] virtualization: kvm guest
	I1101 08:29:18.399672   10731 out.go:179] * [addons-491859] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1101 08:29:18.401439   10731 out.go:179]   - MINIKUBE_LOCATION=21835
	I1101 08:29:18.401488   10731 notify.go:221] Checking for updates...
	I1101 08:29:18.404241   10731 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1101 08:29:18.405465   10731 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21835-5913/kubeconfig
	I1101 08:29:18.406814   10731 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21835-5913/.minikube
	I1101 08:29:18.408143   10731 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1101 08:29:18.409539   10731 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1101 08:29:18.411402   10731 driver.go:422] Setting default libvirt URI to qemu:///system
	I1101 08:29:18.436678   10731 docker.go:124] docker version: linux-28.5.1:Docker Engine - Community
	I1101 08:29:18.436815   10731 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 08:29:18.491880   10731 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:26 OomKillDisable:false NGoroutines:50 SystemTime:2025-11-01 08:29:18.482006307 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1101 08:29:18.492017   10731 docker.go:319] overlay module found
	I1101 08:29:18.494000   10731 out.go:179] * Using the docker driver based on user configuration
	I1101 08:29:18.495094   10731 start.go:309] selected driver: docker
	I1101 08:29:18.495110   10731 start.go:930] validating driver "docker" against <nil>
	I1101 08:29:18.495122   10731 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1101 08:29:18.495681   10731 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 08:29:18.553706   10731 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:26 OomKillDisable:false NGoroutines:50 SystemTime:2025-11-01 08:29:18.544541725 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1101 08:29:18.553848   10731 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1101 08:29:18.554100   10731 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1101 08:29:18.555995   10731 out.go:179] * Using Docker driver with root privileges
	I1101 08:29:18.557517   10731 cni.go:84] Creating CNI manager for ""
	I1101 08:29:18.557586   10731 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1101 08:29:18.557596   10731 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1101 08:29:18.557666   10731 start.go:353] cluster config:
	{Name:addons-491859 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-491859 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime
:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:
AutoPauseInterval:1m0s}
	I1101 08:29:18.559147   10731 out.go:179] * Starting "addons-491859" primary control-plane node in "addons-491859" cluster
	I1101 08:29:18.560143   10731 cache.go:124] Beginning downloading kic base image for docker with crio
	I1101 08:29:18.561413   10731 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1101 08:29:18.562561   10731 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1101 08:29:18.562598   10731 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1101 08:29:18.562609   10731 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21835-5913/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1101 08:29:18.562621   10731 cache.go:59] Caching tarball of preloaded images
	I1101 08:29:18.562706   10731 preload.go:233] Found /home/jenkins/minikube-integration/21835-5913/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1101 08:29:18.562719   10731 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1101 08:29:18.563075   10731 profile.go:143] Saving config to /home/jenkins/minikube-integration/21835-5913/.minikube/profiles/addons-491859/config.json ...
	I1101 08:29:18.563105   10731 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21835-5913/.minikube/profiles/addons-491859/config.json: {Name:mke52046cdc175d21920b9af0bb0df87c10485c6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 08:29:18.580276   10731 cache.go:153] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 to local cache
	I1101 08:29:18.580409   10731 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local cache directory
	I1101 08:29:18.580430   10731 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local cache directory, skipping pull
	I1101 08:29:18.580437   10731 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in cache, skipping pull
	I1101 08:29:18.580449   10731 cache.go:156] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 as a tarball
	I1101 08:29:18.580456   10731 cache.go:166] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 from local cache
	I1101 08:29:30.886343   10731 cache.go:168] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 from cached tarball
	I1101 08:29:30.886382   10731 cache.go:233] Successfully downloaded all kic artifacts
	I1101 08:29:30.886414   10731 start.go:360] acquireMachinesLock for addons-491859: {Name:mk68f33aa39dc4a1fa1cf6d283fdb1adb54191e0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1101 08:29:30.886530   10731 start.go:364] duration metric: took 89.954µs to acquireMachinesLock for "addons-491859"
	I1101 08:29:30.886555   10731 start.go:93] Provisioning new machine with config: &{Name:addons-491859 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-491859 Namespace:default APIServerHAVIP: APIServerName:min
ikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath:
SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1101 08:29:30.886624   10731 start.go:125] createHost starting for "" (driver="docker")
	I1101 08:29:30.888467   10731 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I1101 08:29:30.888693   10731 start.go:159] libmachine.API.Create for "addons-491859" (driver="docker")
	I1101 08:29:30.888723   10731 client.go:173] LocalClient.Create starting
	I1101 08:29:30.888847   10731 main.go:143] libmachine: Creating CA: /home/jenkins/minikube-integration/21835-5913/.minikube/certs/ca.pem
	I1101 08:29:31.180353   10731 main.go:143] libmachine: Creating client certificate: /home/jenkins/minikube-integration/21835-5913/.minikube/certs/cert.pem
	I1101 08:29:31.235331   10731 cli_runner.go:164] Run: docker network inspect addons-491859 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1101 08:29:31.252265   10731 cli_runner.go:211] docker network inspect addons-491859 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1101 08:29:31.252359   10731 network_create.go:284] running [docker network inspect addons-491859] to gather additional debugging logs...
	I1101 08:29:31.252378   10731 cli_runner.go:164] Run: docker network inspect addons-491859
	W1101 08:29:31.269367   10731 cli_runner.go:211] docker network inspect addons-491859 returned with exit code 1
	I1101 08:29:31.269400   10731 network_create.go:287] error running [docker network inspect addons-491859]: docker network inspect addons-491859: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-491859 not found
	I1101 08:29:31.269411   10731 network_create.go:289] output of [docker network inspect addons-491859]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-491859 not found
	
	** /stderr **
	I1101 08:29:31.269518   10731 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1101 08:29:31.286815   10731 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0016f7d10}
	I1101 08:29:31.286880   10731 network_create.go:124] attempt to create docker network addons-491859 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1101 08:29:31.286933   10731 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-491859 addons-491859
	I1101 08:29:31.346236   10731 network_create.go:108] docker network addons-491859 192.168.49.0/24 created
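
For reference, the bridge network the driver just created can be inspected or reproduced with plain Docker CLI calls; a minimal sketch using the profile's network name and the same flags shown in the cli_runner invocation above:

    # show the subnet/gateway minikube picked for the profile network
    docker network inspect addons-491859 \
      --format '{{range .IPAM.Config}}{{.Subnet}} gw {{.Gateway}}{{end}}'

    # equivalent manual creation (flags copied from the log line above)
    docker network create --driver=bridge \
      --subnet=192.168.49.0/24 --gateway=192.168.49.1 \
      -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 \
      --label=created_by.minikube.sigs.k8s.io=true \
      --label=name.minikube.sigs.k8s.io=addons-491859 \
      addons-491859
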
	I1101 08:29:31.346287   10731 kic.go:121] calculated static IP "192.168.49.2" for the "addons-491859" container
	I1101 08:29:31.346356   10731 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1101 08:29:31.364286   10731 cli_runner.go:164] Run: docker volume create addons-491859 --label name.minikube.sigs.k8s.io=addons-491859 --label created_by.minikube.sigs.k8s.io=true
	I1101 08:29:31.382821   10731 oci.go:103] Successfully created a docker volume addons-491859
	I1101 08:29:31.382916   10731 cli_runner.go:164] Run: docker run --rm --name addons-491859-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-491859 --entrypoint /usr/bin/test -v addons-491859:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -d /var/lib
	I1101 08:29:38.032982   10731 cli_runner.go:217] Completed: docker run --rm --name addons-491859-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-491859 --entrypoint /usr/bin/test -v addons-491859:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -d /var/lib: (6.650026929s)
	I1101 08:29:38.033010   10731 oci.go:107] Successfully prepared a docker volume addons-491859
	I1101 08:29:38.033029   10731 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1101 08:29:38.033048   10731 kic.go:194] Starting extracting preloaded images to volume ...
	I1101 08:29:38.033126   10731 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21835-5913/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-491859:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir
	I1101 08:29:42.448129   10731 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21835-5913/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-491859:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir: (4.414960684s)
	I1101 08:29:42.448157   10731 kic.go:203] duration metric: took 4.415105637s to extract preloaded images to volume ...
	W1101 08:29:42.448275   10731 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1101 08:29:42.448309   10731 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1101 08:29:42.448352   10731 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1101 08:29:42.503446   10731 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-491859 --name addons-491859 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-491859 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-491859 --network addons-491859 --ip 192.168.49.2 --volume addons-491859:/var --security-opt apparmor=unconfined --memory=4096mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8
	I1101 08:29:42.814516   10731 cli_runner.go:164] Run: docker container inspect addons-491859 --format={{.State.Running}}
	I1101 08:29:42.835696   10731 cli_runner.go:164] Run: docker container inspect addons-491859 --format={{.State.Status}}
	I1101 08:29:42.854515   10731 cli_runner.go:164] Run: docker exec addons-491859 stat /var/lib/dpkg/alternatives/iptables
	I1101 08:29:42.909716   10731 oci.go:144] the created container "addons-491859" has a running status.
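
The same inspect/exec calls the kic driver issues can be repeated by hand to confirm the node container is healthy; a small sketch (container name addons-491859 as above):

    # expect "running"
    docker container inspect addons-491859 --format '{{.State.Status}}'
    # the driver's own sanity check: iptables alternatives must exist in the node image
    docker exec addons-491859 stat /var/lib/dpkg/alternatives/iptables
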
	I1101 08:29:42.909786   10731 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21835-5913/.minikube/machines/addons-491859/id_rsa...
	I1101 08:29:43.135081   10731 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21835-5913/.minikube/machines/addons-491859/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1101 08:29:43.170376   10731 cli_runner.go:164] Run: docker container inspect addons-491859 --format={{.State.Status}}
	I1101 08:29:43.191335   10731 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1101 08:29:43.191357   10731 kic_runner.go:114] Args: [docker exec --privileged addons-491859 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1101 08:29:43.240422   10731 cli_runner.go:164] Run: docker container inspect addons-491859 --format={{.State.Status}}
	I1101 08:29:43.261938   10731 machine.go:94] provisionDockerMachine start ...
	I1101 08:29:43.262057   10731 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-491859
	I1101 08:29:43.281484   10731 main.go:143] libmachine: Using SSH client type: native
	I1101 08:29:43.281775   10731 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1101 08:29:43.281794   10731 main.go:143] libmachine: About to run SSH command:
	hostname
	I1101 08:29:43.425747   10731 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-491859
	
	I1101 08:29:43.425777   10731 ubuntu.go:182] provisioning hostname "addons-491859"
	I1101 08:29:43.425836   10731 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-491859
	I1101 08:29:43.444569   10731 main.go:143] libmachine: Using SSH client type: native
	I1101 08:29:43.444850   10731 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1101 08:29:43.444886   10731 main.go:143] libmachine: About to run SSH command:
	sudo hostname addons-491859 && echo "addons-491859" | sudo tee /etc/hostname
	I1101 08:29:43.597299   10731 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-491859
	
	I1101 08:29:43.597387   10731 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-491859
	I1101 08:29:43.615147   10731 main.go:143] libmachine: Using SSH client type: native
	I1101 08:29:43.615387   10731 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1101 08:29:43.615407   10731 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-491859' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-491859/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-491859' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1101 08:29:43.756412   10731 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1101 08:29:43.756438   10731 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21835-5913/.minikube CaCertPath:/home/jenkins/minikube-integration/21835-5913/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21835-5913/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21835-5913/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21835-5913/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21835-5913/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21835-5913/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21835-5913/.minikube}
	I1101 08:29:43.756480   10731 ubuntu.go:190] setting up certificates
	I1101 08:29:43.756494   10731 provision.go:84] configureAuth start
	I1101 08:29:43.756548   10731 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-491859
	I1101 08:29:43.774542   10731 provision.go:143] copyHostCerts
	I1101 08:29:43.774626   10731 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21835-5913/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21835-5913/.minikube/ca.pem (1078 bytes)
	I1101 08:29:43.774741   10731 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21835-5913/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21835-5913/.minikube/cert.pem (1123 bytes)
	I1101 08:29:43.774803   10731 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21835-5913/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21835-5913/.minikube/key.pem (1675 bytes)
	I1101 08:29:43.774855   10731 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21835-5913/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21835-5913/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21835-5913/.minikube/certs/ca-key.pem org=jenkins.addons-491859 san=[127.0.0.1 192.168.49.2 addons-491859 localhost minikube]
	I1101 08:29:44.000398   10731 provision.go:177] copyRemoteCerts
	I1101 08:29:44.000455   10731 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1101 08:29:44.000491   10731 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-491859
	I1101 08:29:44.018425   10731 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21835-5913/.minikube/machines/addons-491859/id_rsa Username:docker}
	I1101 08:29:44.119221   10731 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-5913/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1101 08:29:44.138425   10731 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-5913/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1101 08:29:44.156400   10731 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-5913/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1101 08:29:44.174359   10731 provision.go:87] duration metric: took 417.851925ms to configureAuth
	I1101 08:29:44.174388   10731 ubuntu.go:206] setting minikube options for container-runtime
	I1101 08:29:44.174582   10731 config.go:182] Loaded profile config "addons-491859": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 08:29:44.174696   10731 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-491859
	I1101 08:29:44.192685   10731 main.go:143] libmachine: Using SSH client type: native
	I1101 08:29:44.192995   10731 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1101 08:29:44.193022   10731 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1101 08:29:44.447606   10731 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1101 08:29:44.447631   10731 machine.go:97] duration metric: took 1.185659743s to provisionDockerMachine
	I1101 08:29:44.447644   10731 client.go:176] duration metric: took 13.558912809s to LocalClient.Create
	I1101 08:29:44.447671   10731 start.go:167] duration metric: took 13.558978318s to libmachine.API.Create "addons-491859"
	I1101 08:29:44.447680   10731 start.go:293] postStartSetup for "addons-491859" (driver="docker")
	I1101 08:29:44.447693   10731 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1101 08:29:44.447752   10731 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1101 08:29:44.447791   10731 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-491859
	I1101 08:29:44.465841   10731 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21835-5913/.minikube/machines/addons-491859/id_rsa Username:docker}
	I1101 08:29:44.568433   10731 ssh_runner.go:195] Run: cat /etc/os-release
	I1101 08:29:44.572471   10731 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1101 08:29:44.572498   10731 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1101 08:29:44.572521   10731 filesync.go:126] Scanning /home/jenkins/minikube-integration/21835-5913/.minikube/addons for local assets ...
	I1101 08:29:44.572588   10731 filesync.go:126] Scanning /home/jenkins/minikube-integration/21835-5913/.minikube/files for local assets ...
	I1101 08:29:44.572614   10731 start.go:296] duration metric: took 124.92709ms for postStartSetup
	I1101 08:29:44.572955   10731 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-491859
	I1101 08:29:44.591464   10731 profile.go:143] Saving config to /home/jenkins/minikube-integration/21835-5913/.minikube/profiles/addons-491859/config.json ...
	I1101 08:29:44.591728   10731 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1101 08:29:44.591766   10731 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-491859
	I1101 08:29:44.611217   10731 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21835-5913/.minikube/machines/addons-491859/id_rsa Username:docker}
	I1101 08:29:44.709102   10731 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1101 08:29:44.713766   10731 start.go:128] duration metric: took 13.827130547s to createHost
	I1101 08:29:44.713794   10731 start.go:83] releasing machines lock for "addons-491859", held for 13.827250706s
	I1101 08:29:44.713882   10731 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-491859
	I1101 08:29:44.733704   10731 ssh_runner.go:195] Run: cat /version.json
	I1101 08:29:44.733759   10731 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-491859
	I1101 08:29:44.733786   10731 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1101 08:29:44.733841   10731 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-491859
	I1101 08:29:44.753676   10731 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21835-5913/.minikube/machines/addons-491859/id_rsa Username:docker}
	I1101 08:29:44.754517   10731 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21835-5913/.minikube/machines/addons-491859/id_rsa Username:docker}
	I1101 08:29:44.849952   10731 ssh_runner.go:195] Run: systemctl --version
	I1101 08:29:44.904440   10731 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1101 08:29:44.941511   10731 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1101 08:29:44.946240   10731 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1101 08:29:44.946308   10731 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1101 08:29:44.972407   10731 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1101 08:29:44.972434   10731 start.go:496] detecting cgroup driver to use...
	I1101 08:29:44.972462   10731 detect.go:190] detected "systemd" cgroup driver on host os
	I1101 08:29:44.972500   10731 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1101 08:29:44.988367   10731 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1101 08:29:45.001094   10731 docker.go:218] disabling cri-docker service (if available) ...
	I1101 08:29:45.001157   10731 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1101 08:29:45.017747   10731 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1101 08:29:45.035550   10731 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1101 08:29:45.113377   10731 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1101 08:29:45.198918   10731 docker.go:234] disabling docker service ...
	I1101 08:29:45.198974   10731 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1101 08:29:45.217439   10731 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1101 08:29:45.230101   10731 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1101 08:29:45.312169   10731 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1101 08:29:45.393451   10731 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1101 08:29:45.406098   10731 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1101 08:29:45.420630   10731 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1101 08:29:45.420694   10731 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 08:29:45.431355   10731 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1101 08:29:45.431426   10731 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 08:29:45.440760   10731 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 08:29:45.449985   10731 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 08:29:45.459096   10731 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1101 08:29:45.467588   10731 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 08:29:45.476897   10731 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 08:29:45.490608   10731 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 08:29:45.499210   10731 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1101 08:29:45.506911   10731 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1101 08:29:45.506971   10731 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1101 08:29:45.519650   10731 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1101 08:29:45.527667   10731 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 08:29:45.606421   10731 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1101 08:29:45.708452   10731 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1101 08:29:45.708537   10731 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1101 08:29:45.712592   10731 start.go:564] Will wait 60s for crictl version
	I1101 08:29:45.712643   10731 ssh_runner.go:195] Run: which crictl
	I1101 08:29:45.716286   10731 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1101 08:29:45.741302   10731 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1101 08:29:45.741413   10731 ssh_runner.go:195] Run: crio --version
	I1101 08:29:45.768032   10731 ssh_runner.go:195] Run: crio --version
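
The CRI-O adjustments above boil down to a few edits to one drop-in file followed by a restart; a condensed sketch of the key steps, run inside the node (for example via minikube ssh), using the paths and values shown in the log:

    # use the expected pause image and the systemd cgroup driver
    sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf
    sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf
    # let pods bind low ports without extra privileges (assumes the default_sysctls array is present)
    sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf
    # restart and verify the runtime answers over its socket
    sudo systemctl daemon-reload && sudo systemctl restart crio
    sudo crictl version
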
	I1101 08:29:45.798083   10731 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1101 08:29:45.799228   10731 cli_runner.go:164] Run: docker network inspect addons-491859 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1101 08:29:45.816845   10731 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1101 08:29:45.821062   10731 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1101 08:29:45.831675   10731 kubeadm.go:884] updating cluster {Name:addons-491859 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-491859 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNa
mes:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketV
MnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1101 08:29:45.831843   10731 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1101 08:29:45.831917   10731 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 08:29:45.865293   10731 crio.go:514] all images are preloaded for cri-o runtime.
	I1101 08:29:45.865315   10731 crio.go:433] Images already preloaded, skipping extraction
	I1101 08:29:45.865364   10731 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 08:29:45.891285   10731 crio.go:514] all images are preloaded for cri-o runtime.
	I1101 08:29:45.891306   10731 cache_images.go:86] Images are preloaded, skipping loading
	I1101 08:29:45.891315   10731 kubeadm.go:935] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1101 08:29:45.891413   10731 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-491859 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:addons-491859 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1101 08:29:45.891486   10731 ssh_runner.go:195] Run: crio config
	I1101 08:29:45.936476   10731 cni.go:84] Creating CNI manager for ""
	I1101 08:29:45.936502   10731 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1101 08:29:45.936523   10731 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1101 08:29:45.936544   10731 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-491859 NodeName:addons-491859 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernet
es/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1101 08:29:45.936665   10731 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-491859"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
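
This rendered manifest is what gets copied to /var/tmp/minikube/kubeadm.yaml.new a few lines below and later drives the kubeadm bootstrap. Assuming a matching kubeadm binary is available on the node, such a file can be exercised without applying anything, for example:

    # run kubeadm's preflight checks and manifest generation without touching the node
    sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml.new --dry-run
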
	
	I1101 08:29:45.936725   10731 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1101 08:29:45.945444   10731 binaries.go:44] Found k8s binaries, skipping transfer
	I1101 08:29:45.945521   10731 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1101 08:29:45.953729   10731 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1101 08:29:45.967053   10731 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1101 08:29:45.983566   10731 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2209 bytes)
	I1101 08:29:45.997069   10731 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1101 08:29:46.000903   10731 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1101 08:29:46.011598   10731 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 08:29:46.091901   10731 ssh_runner.go:195] Run: sudo systemctl start kubelet
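
At this point the kubelet unit has been written and started but has no kubeconfig yet, so it may restart until kubeadm completes; its state can be checked with ordinary systemd tooling on the node:

    sudo systemctl is-active kubelet              # unit state (may flap before kubeadm init)
    sudo journalctl -u kubelet --no-pager -n 20   # most recent kubelet log lines
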
	I1101 08:29:46.116472   10731 certs.go:69] Setting up /home/jenkins/minikube-integration/21835-5913/.minikube/profiles/addons-491859 for IP: 192.168.49.2
	I1101 08:29:46.116499   10731 certs.go:195] generating shared ca certs ...
	I1101 08:29:46.116515   10731 certs.go:227] acquiring lock for ca certs: {Name:mkfdee6a84670347521013ebeef165551380cb9c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 08:29:46.116646   10731 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/21835-5913/.minikube/ca.key
	I1101 08:29:46.259033   10731 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21835-5913/.minikube/ca.crt ...
	I1101 08:29:46.259063   10731 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21835-5913/.minikube/ca.crt: {Name:mk4bf3995d5d0f4fef38f99e080776cf96bc48cd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 08:29:46.259225   10731 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21835-5913/.minikube/ca.key ...
	I1101 08:29:46.259236   10731 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21835-5913/.minikube/ca.key: {Name:mkd1f675dd286f2d5b71c8b39a4614cd145027a0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 08:29:46.259325   10731 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21835-5913/.minikube/proxy-client-ca.key
	I1101 08:29:46.470101   10731 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21835-5913/.minikube/proxy-client-ca.crt ...
	I1101 08:29:46.470136   10731 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21835-5913/.minikube/proxy-client-ca.crt: {Name:mk4d5edb6e3284aedb960a5d17b6874006117575 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 08:29:46.470312   10731 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21835-5913/.minikube/proxy-client-ca.key ...
	I1101 08:29:46.470322   10731 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21835-5913/.minikube/proxy-client-ca.key: {Name:mkdbbe1554f606cb64b651fbfe7fb2d808191132 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 08:29:46.470397   10731 certs.go:257] generating profile certs ...
	I1101 08:29:46.470482   10731 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21835-5913/.minikube/profiles/addons-491859/client.key
	I1101 08:29:46.470506   10731 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21835-5913/.minikube/profiles/addons-491859/client.crt with IP's: []
	I1101 08:29:46.631245   10731 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21835-5913/.minikube/profiles/addons-491859/client.crt ...
	I1101 08:29:46.631280   10731 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21835-5913/.minikube/profiles/addons-491859/client.crt: {Name:mkad9c6537b618eb28e78c59039c41f01bf0b157 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 08:29:46.631456   10731 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21835-5913/.minikube/profiles/addons-491859/client.key ...
	I1101 08:29:46.631467   10731 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21835-5913/.minikube/profiles/addons-491859/client.key: {Name:mk71bc388ac6118a79f3338cab825b3d9b05a13f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 08:29:46.631543   10731 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21835-5913/.minikube/profiles/addons-491859/apiserver.key.54d41853
	I1101 08:29:46.631561   10731 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21835-5913/.minikube/profiles/addons-491859/apiserver.crt.54d41853 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1101 08:29:47.056564   10731 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21835-5913/.minikube/profiles/addons-491859/apiserver.crt.54d41853 ...
	I1101 08:29:47.056601   10731 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21835-5913/.minikube/profiles/addons-491859/apiserver.crt.54d41853: {Name:mka7b039c670b14a7a31317583752fd87a0fd045 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 08:29:47.056772   10731 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21835-5913/.minikube/profiles/addons-491859/apiserver.key.54d41853 ...
	I1101 08:29:47.056785   10731 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21835-5913/.minikube/profiles/addons-491859/apiserver.key.54d41853: {Name:mke7fad54183e66065994d5454419195014552ae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 08:29:47.056878   10731 certs.go:382] copying /home/jenkins/minikube-integration/21835-5913/.minikube/profiles/addons-491859/apiserver.crt.54d41853 -> /home/jenkins/minikube-integration/21835-5913/.minikube/profiles/addons-491859/apiserver.crt
	I1101 08:29:47.056960   10731 certs.go:386] copying /home/jenkins/minikube-integration/21835-5913/.minikube/profiles/addons-491859/apiserver.key.54d41853 -> /home/jenkins/minikube-integration/21835-5913/.minikube/profiles/addons-491859/apiserver.key
	I1101 08:29:47.057010   10731 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21835-5913/.minikube/profiles/addons-491859/proxy-client.key
	I1101 08:29:47.057029   10731 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21835-5913/.minikube/profiles/addons-491859/proxy-client.crt with IP's: []
	I1101 08:29:47.316919   10731 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21835-5913/.minikube/profiles/addons-491859/proxy-client.crt ...
	I1101 08:29:47.316951   10731 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21835-5913/.minikube/profiles/addons-491859/proxy-client.crt: {Name:mk04669aab90e96b4612effdbd0c5217954f9ad6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 08:29:47.317125   10731 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21835-5913/.minikube/profiles/addons-491859/proxy-client.key ...
	I1101 08:29:47.317137   10731 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21835-5913/.minikube/profiles/addons-491859/proxy-client.key: {Name:mk921078f929be7c707b6c61cfb161c2d07cd92c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 08:29:47.317339   10731 certs.go:484] found cert: /home/jenkins/minikube-integration/21835-5913/.minikube/certs/ca-key.pem (1675 bytes)
	I1101 08:29:47.317377   10731 certs.go:484] found cert: /home/jenkins/minikube-integration/21835-5913/.minikube/certs/ca.pem (1078 bytes)
	I1101 08:29:47.317397   10731 certs.go:484] found cert: /home/jenkins/minikube-integration/21835-5913/.minikube/certs/cert.pem (1123 bytes)
	I1101 08:29:47.317415   10731 certs.go:484] found cert: /home/jenkins/minikube-integration/21835-5913/.minikube/certs/key.pem (1675 bytes)
	I1101 08:29:47.317987   10731 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-5913/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1101 08:29:47.336457   10731 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-5913/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1101 08:29:47.354326   10731 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-5913/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1101 08:29:47.372213   10731 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-5913/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1101 08:29:47.390015   10731 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-5913/.minikube/profiles/addons-491859/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1101 08:29:47.408600   10731 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-5913/.minikube/profiles/addons-491859/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1101 08:29:47.427246   10731 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-5913/.minikube/profiles/addons-491859/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1101 08:29:47.445950   10731 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-5913/.minikube/profiles/addons-491859/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1101 08:29:47.464227   10731 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-5913/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1101 08:29:47.484996   10731 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1101 08:29:47.498119   10731 ssh_runner.go:195] Run: openssl version
	I1101 08:29:47.504350   10731 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1101 08:29:47.515795   10731 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1101 08:29:47.519757   10731 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  1 08:29 /usr/share/ca-certificates/minikubeCA.pem
	I1101 08:29:47.519849   10731 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1101 08:29:47.553830   10731 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1101 08:29:47.563614   10731 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1101 08:29:47.567597   10731 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1101 08:29:47.567662   10731 kubeadm.go:401] StartCluster: {Name:addons-491859 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-491859 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 08:29:47.567745   10731 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1101 08:29:47.567795   10731 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1101 08:29:47.596594   10731 cri.go:89] found id: ""
	I1101 08:29:47.596673   10731 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1101 08:29:47.605319   10731 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1101 08:29:47.613740   10731 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1101 08:29:47.613791   10731 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1101 08:29:47.622171   10731 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1101 08:29:47.622204   10731 kubeadm.go:158] found existing configuration files:
	
	I1101 08:29:47.622253   10731 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1101 08:29:47.630497   10731 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1101 08:29:47.630562   10731 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1101 08:29:47.638605   10731 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1101 08:29:47.646770   10731 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1101 08:29:47.646828   10731 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1101 08:29:47.654629   10731 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1101 08:29:47.662566   10731 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1101 08:29:47.662631   10731 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1101 08:29:47.670809   10731 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1101 08:29:47.679951   10731 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1101 08:29:47.680031   10731 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1101 08:29:47.688660   10731 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1101 08:29:47.726643   10731 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1101 08:29:47.726697   10731 kubeadm.go:319] [preflight] Running pre-flight checks
	I1101 08:29:47.748533   10731 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1101 08:29:47.748608   10731 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1043-gcp
	I1101 08:29:47.748670   10731 kubeadm.go:319] OS: Linux
	I1101 08:29:47.748756   10731 kubeadm.go:319] CGROUPS_CPU: enabled
	I1101 08:29:47.748815   10731 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1101 08:29:47.748859   10731 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1101 08:29:47.748936   10731 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1101 08:29:47.748982   10731 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1101 08:29:47.749023   10731 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1101 08:29:47.749097   10731 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1101 08:29:47.749163   10731 kubeadm.go:319] CGROUPS_IO: enabled
	I1101 08:29:47.805000   10731 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1101 08:29:47.805091   10731 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1101 08:29:47.805196   10731 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1101 08:29:47.811990   10731 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1101 08:29:47.813963   10731 out.go:252]   - Generating certificates and keys ...
	I1101 08:29:47.814047   10731 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1101 08:29:47.814148   10731 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1101 08:29:47.942661   10731 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1101 08:29:48.258914   10731 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1101 08:29:48.892215   10731 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1101 08:29:49.143224   10731 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1101 08:29:49.531503   10731 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1101 08:29:49.531665   10731 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [addons-491859 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1101 08:29:49.879321   10731 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1101 08:29:49.879473   10731 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [addons-491859 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1101 08:29:50.262434   10731 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1101 08:29:50.378287   10731 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1101 08:29:50.604682   10731 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1101 08:29:50.604768   10731 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1101 08:29:51.288241   10731 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1101 08:29:51.427432   10731 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1101 08:29:51.661821   10731 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1101 08:29:51.718850   10731 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1101 08:29:51.976623   10731 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1101 08:29:51.977051   10731 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1101 08:29:51.980988   10731 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1101 08:29:51.982657   10731 out.go:252]   - Booting up control plane ...
	I1101 08:29:51.982752   10731 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1101 08:29:51.982825   10731 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1101 08:29:51.983140   10731 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1101 08:29:51.996841   10731 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1101 08:29:51.996985   10731 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1101 08:29:52.004102   10731 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1101 08:29:52.005057   10731 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1101 08:29:52.005151   10731 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1101 08:29:52.102302   10731 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1101 08:29:52.102463   10731 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1101 08:29:53.103073   10731 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.000895763s
	I1101 08:29:53.105977   10731 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1101 08:29:53.106121   10731 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1101 08:29:53.106227   10731 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1101 08:29:53.106304   10731 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1101 08:29:54.660126   10731 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 1.55416674s
	I1101 08:29:56.077706   10731 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 2.971638524s
	I1101 08:29:56.607587   10731 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 3.501591806s
	I1101 08:29:56.618695   10731 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1101 08:29:56.628394   10731 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1101 08:29:56.637122   10731 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1101 08:29:56.637356   10731 kubeadm.go:319] [mark-control-plane] Marking the node addons-491859 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1101 08:29:56.644796   10731 kubeadm.go:319] [bootstrap-token] Using token: wo1v43.v1n7lssssb2gwy0c
	I1101 08:29:56.646016   10731 out.go:252]   - Configuring RBAC rules ...
	I1101 08:29:56.646176   10731 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1101 08:29:56.651320   10731 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1101 08:29:56.656416   10731 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1101 08:29:56.658758   10731 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1101 08:29:56.661186   10731 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1101 08:29:56.664533   10731 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1101 08:29:57.013451   10731 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1101 08:29:57.429805   10731 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1101 08:29:58.014521   10731 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1101 08:29:58.015314   10731 kubeadm.go:319] 
	I1101 08:29:58.015376   10731 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1101 08:29:58.015408   10731 kubeadm.go:319] 
	I1101 08:29:58.015596   10731 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1101 08:29:58.015620   10731 kubeadm.go:319] 
	I1101 08:29:58.015660   10731 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1101 08:29:58.015737   10731 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1101 08:29:58.015815   10731 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1101 08:29:58.015825   10731 kubeadm.go:319] 
	I1101 08:29:58.015944   10731 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1101 08:29:58.015964   10731 kubeadm.go:319] 
	I1101 08:29:58.016044   10731 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1101 08:29:58.016054   10731 kubeadm.go:319] 
	I1101 08:29:58.016128   10731 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1101 08:29:58.016250   10731 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1101 08:29:58.016340   10731 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1101 08:29:58.016351   10731 kubeadm.go:319] 
	I1101 08:29:58.016479   10731 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1101 08:29:58.016588   10731 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1101 08:29:58.016622   10731 kubeadm.go:319] 
	I1101 08:29:58.016759   10731 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token wo1v43.v1n7lssssb2gwy0c \
	I1101 08:29:58.016934   10731 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:77adef89518789e9abc276228bed68518bce4d031c33004dbbcc2105ab5e0752 \
	I1101 08:29:58.016967   10731 kubeadm.go:319] 	--control-plane 
	I1101 08:29:58.016991   10731 kubeadm.go:319] 
	I1101 08:29:58.017124   10731 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1101 08:29:58.017138   10731 kubeadm.go:319] 
	I1101 08:29:58.017265   10731 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token wo1v43.v1n7lssssb2gwy0c \
	I1101 08:29:58.017402   10731 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:77adef89518789e9abc276228bed68518bce4d031c33004dbbcc2105ab5e0752 
	I1101 08:29:58.019000   10731 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1043-gcp\n", err: exit status 1
	I1101 08:29:58.019147   10731 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1101 08:29:58.019180   10731 cni.go:84] Creating CNI manager for ""
	I1101 08:29:58.019194   10731 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1101 08:29:58.020819   10731 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1101 08:29:58.022079   10731 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1101 08:29:58.026366   10731 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1101 08:29:58.026382   10731 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1101 08:29:58.039818   10731 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1101 08:29:58.239103   10731 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1101 08:29:58.239193   10731 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 08:29:58.239219   10731 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-491859 minikube.k8s.io/updated_at=2025_11_01T08_29_58_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=21e20c7776311c6e29254646bf2620ea610dd192 minikube.k8s.io/name=addons-491859 minikube.k8s.io/primary=true
	I1101 08:29:58.314884   10731 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 08:29:58.314938   10731 ops.go:34] apiserver oom_adj: -16
	I1101 08:29:58.815245   10731 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 08:29:59.315706   10731 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 08:29:59.815551   10731 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 08:30:00.315568   10731 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 08:30:00.816002   10731 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 08:30:01.315085   10731 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 08:30:01.815408   10731 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 08:30:02.315961   10731 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 08:30:02.815934   10731 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 08:30:03.315654   10731 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 08:30:03.390965   10731 kubeadm.go:1114] duration metric: took 5.151843777s to wait for elevateKubeSystemPrivileges
	I1101 08:30:03.391002   10731 kubeadm.go:403] duration metric: took 15.823344629s to StartCluster
	I1101 08:30:03.391022   10731 settings.go:142] acquiring lock: {Name:mkb1ba7d0d4bb15f3f0746ce486d72703f901580 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 08:30:03.391147   10731 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21835-5913/kubeconfig
	I1101 08:30:03.391707   10731 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21835-5913/kubeconfig: {Name:mk9d3795c1875d6d9ba81c81d9d8436a6b7942d2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 08:30:03.391948   10731 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1101 08:30:03.392021   10731 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1101 08:30:03.392053   10731 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1101 08:30:03.392190   10731 config.go:182] Loaded profile config "addons-491859": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 08:30:03.392229   10731 addons.go:70] Setting default-storageclass=true in profile "addons-491859"
	I1101 08:30:03.392231   10731 addons.go:70] Setting yakd=true in profile "addons-491859"
	I1101 08:30:03.392242   10731 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "addons-491859"
	I1101 08:30:03.392253   10731 addons.go:239] Setting addon yakd=true in "addons-491859"
	I1101 08:30:03.392275   10731 addons.go:70] Setting gcp-auth=true in profile "addons-491859"
	I1101 08:30:03.392307   10731 mustload.go:66] Loading cluster: addons-491859
	I1101 08:30:03.392308   10731 host.go:66] Checking if "addons-491859" exists ...
	I1101 08:30:03.392552   10731 config.go:182] Loaded profile config "addons-491859": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 08:30:03.392682   10731 cli_runner.go:164] Run: docker container inspect addons-491859 --format={{.State.Status}}
	I1101 08:30:03.392812   10731 cli_runner.go:164] Run: docker container inspect addons-491859 --format={{.State.Status}}
	I1101 08:30:03.392809   10731 addons.go:70] Setting registry=true in profile "addons-491859"
	I1101 08:30:03.392886   10731 addons.go:239] Setting addon registry=true in "addons-491859"
	I1101 08:30:03.392901   10731 addons.go:70] Setting amd-gpu-device-plugin=true in profile "addons-491859"
	I1101 08:30:03.392915   10731 cli_runner.go:164] Run: docker container inspect addons-491859 --format={{.State.Status}}
	I1101 08:30:03.392923   10731 addons.go:70] Setting cloud-spanner=true in profile "addons-491859"
	I1101 08:30:03.392933   10731 addons.go:239] Setting addon cloud-spanner=true in "addons-491859"
	I1101 08:30:03.392948   10731 host.go:66] Checking if "addons-491859" exists ...
	I1101 08:30:03.392958   10731 host.go:66] Checking if "addons-491859" exists ...
	I1101 08:30:03.393216   10731 addons.go:70] Setting inspektor-gadget=true in profile "addons-491859"
	I1101 08:30:03.393251   10731 addons.go:239] Setting addon inspektor-gadget=true in "addons-491859"
	I1101 08:30:03.393276   10731 host.go:66] Checking if "addons-491859" exists ...
	I1101 08:30:03.393509   10731 cli_runner.go:164] Run: docker container inspect addons-491859 --format={{.State.Status}}
	I1101 08:30:03.393567   10731 cli_runner.go:164] Run: docker container inspect addons-491859 --format={{.State.Status}}
	I1101 08:30:03.393750   10731 addons.go:70] Setting volcano=true in profile "addons-491859"
	I1101 08:30:03.393844   10731 addons.go:239] Setting addon volcano=true in "addons-491859"
	I1101 08:30:03.393940   10731 host.go:66] Checking if "addons-491859" exists ...
	I1101 08:30:03.394111   10731 cli_runner.go:164] Run: docker container inspect addons-491859 --format={{.State.Status}}
	I1101 08:30:03.394308   10731 addons.go:70] Setting metrics-server=true in profile "addons-491859"
	I1101 08:30:03.394327   10731 addons.go:239] Setting addon metrics-server=true in "addons-491859"
	I1101 08:30:03.394349   10731 host.go:66] Checking if "addons-491859" exists ...
	I1101 08:30:03.394639   10731 cli_runner.go:164] Run: docker container inspect addons-491859 --format={{.State.Status}}
	I1101 08:30:03.394807   10731 cli_runner.go:164] Run: docker container inspect addons-491859 --format={{.State.Status}}
	I1101 08:30:03.395199   10731 addons.go:70] Setting volumesnapshots=true in profile "addons-491859"
	I1101 08:30:03.395224   10731 addons.go:239] Setting addon volumesnapshots=true in "addons-491859"
	I1101 08:30:03.395250   10731 host.go:66] Checking if "addons-491859" exists ...
	I1101 08:30:03.392917   10731 addons.go:239] Setting addon amd-gpu-device-plugin=true in "addons-491859"
	I1101 08:30:03.396651   10731 host.go:66] Checking if "addons-491859" exists ...
	I1101 08:30:03.397196   10731 cli_runner.go:164] Run: docker container inspect addons-491859 --format={{.State.Status}}
	I1101 08:30:03.397600   10731 addons.go:70] Setting nvidia-device-plugin=true in profile "addons-491859"
	I1101 08:30:03.397618   10731 addons.go:239] Setting addon nvidia-device-plugin=true in "addons-491859"
	I1101 08:30:03.397650   10731 host.go:66] Checking if "addons-491859" exists ...
	I1101 08:30:03.398137   10731 addons.go:70] Setting ingress=true in profile "addons-491859"
	I1101 08:30:03.398200   10731 addons.go:239] Setting addon ingress=true in "addons-491859"
	I1101 08:30:03.398272   10731 host.go:66] Checking if "addons-491859" exists ...
	I1101 08:30:03.398958   10731 addons.go:70] Setting storage-provisioner-rancher=true in profile "addons-491859"
	I1101 08:30:03.398993   10731 addons_storage_classes.go:34] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-491859"
	I1101 08:30:03.399360   10731 cli_runner.go:164] Run: docker container inspect addons-491859 --format={{.State.Status}}
	I1101 08:30:03.399985   10731 addons.go:70] Setting ingress-dns=true in profile "addons-491859"
	I1101 08:30:03.400057   10731 addons.go:239] Setting addon ingress-dns=true in "addons-491859"
	I1101 08:30:03.400100   10731 host.go:66] Checking if "addons-491859" exists ...
	I1101 08:30:03.400435   10731 out.go:179] * Verifying Kubernetes components...
	I1101 08:30:03.401503   10731 cli_runner.go:164] Run: docker container inspect addons-491859 --format={{.State.Status}}
	I1101 08:30:03.400500   10731 addons.go:70] Setting storage-provisioner=true in profile "addons-491859"
	I1101 08:30:03.401684   10731 addons.go:239] Setting addon storage-provisioner=true in "addons-491859"
	I1101 08:30:03.401716   10731 host.go:66] Checking if "addons-491859" exists ...
	I1101 08:30:03.402151   10731 cli_runner.go:164] Run: docker container inspect addons-491859 --format={{.State.Status}}
	I1101 08:30:03.400522   10731 addons.go:70] Setting registry-creds=true in profile "addons-491859"
	I1101 08:30:03.402347   10731 addons.go:239] Setting addon registry-creds=true in "addons-491859"
	I1101 08:30:03.402391   10731 host.go:66] Checking if "addons-491859" exists ...
	I1101 08:30:03.392186   10731 addons.go:70] Setting csi-hostpath-driver=true in profile "addons-491859"
	I1101 08:30:03.402484   10731 addons.go:239] Setting addon csi-hostpath-driver=true in "addons-491859"
	I1101 08:30:03.402503   10731 host.go:66] Checking if "addons-491859" exists ...
	I1101 08:30:03.403980   10731 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 08:30:03.406655   10731 cli_runner.go:164] Run: docker container inspect addons-491859 --format={{.State.Status}}
	I1101 08:30:03.407485   10731 cli_runner.go:164] Run: docker container inspect addons-491859 --format={{.State.Status}}
	I1101 08:30:03.407543   10731 cli_runner.go:164] Run: docker container inspect addons-491859 --format={{.State.Status}}
	I1101 08:30:03.408313   10731 cli_runner.go:164] Run: docker container inspect addons-491859 --format={{.State.Status}}
	I1101 08:30:03.409111   10731 cli_runner.go:164] Run: docker container inspect addons-491859 --format={{.State.Status}}
	I1101 08:30:03.439792   10731 host.go:66] Checking if "addons-491859" exists ...
	I1101 08:30:03.451368   10731 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.42
	I1101 08:30:03.452570   10731 addons.go:436] installing /etc/kubernetes/addons/deployment.yaml
	I1101 08:30:03.452596   10731 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1101 08:30:03.452667   10731 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-491859
	I1101 08:30:03.469293   10731 addons.go:239] Setting addon default-storageclass=true in "addons-491859"
	I1101 08:30:03.469425   10731 host.go:66] Checking if "addons-491859" exists ...
	I1101 08:30:03.469784   10731 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1101 08:30:03.470722   10731 cli_runner.go:164] Run: docker container inspect addons-491859 --format={{.State.Status}}
	I1101 08:30:03.471012   10731 addons.go:436] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1101 08:30:03.471090   10731 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1101 08:30:03.471208   10731 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-491859
	W1101 08:30:03.488274   10731 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1101 08:30:03.495847   10731 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I1101 08:30:03.497296   10731 addons.go:436] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I1101 08:30:03.497321   10731 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I1101 08:30:03.497326   10731 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I1101 08:30:03.497393   10731 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-491859
	I1101 08:30:03.498605   10731 out.go:179]   - Using image docker.io/registry:3.0.0
	I1101 08:30:03.498891   10731 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1101 08:30:03.499023   10731 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.45.0
	I1101 08:30:03.499900   10731 addons.go:436] installing /etc/kubernetes/addons/registry-rc.yaml
	I1101 08:30:03.499915   10731 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1101 08:30:03.500008   10731 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-491859
	I1101 08:30:03.500578   10731 addons.go:436] installing /etc/kubernetes/addons/ig-crd.yaml
	I1101 08:30:03.500592   10731 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (14 bytes)
	I1101 08:30:03.500675   10731 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-491859
	I1101 08:30:03.501025   10731 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1101 08:30:03.501060   10731 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1101 08:30:03.501115   10731 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-491859
	I1101 08:30:03.507713   10731 addons.go:239] Setting addon storage-provisioner-rancher=true in "addons-491859"
	I1101 08:30:03.507764   10731 host.go:66] Checking if "addons-491859" exists ...
	I1101 08:30:03.508291   10731 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1101 08:30:03.508562   10731 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.18.0
	I1101 08:30:03.509542   10731 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1101 08:30:03.509638   10731 addons.go:436] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1101 08:30:03.509649   10731 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1101 08:30:03.509705   10731 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-491859
	I1101 08:30:03.510112   10731 cli_runner.go:164] Run: docker container inspect addons-491859 --format={{.State.Status}}
	I1101 08:30:03.521910   10731 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1101 08:30:03.525239   10731 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1101 08:30:03.525270   10731 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.13.3
	I1101 08:30:03.526481   10731 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1101 08:30:03.526541   10731 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1101 08:30:03.527910   10731 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I1101 08:30:03.529554   10731 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1101 08:30:03.529688   10731 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1101 08:30:03.529816   10731 addons.go:436] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1101 08:30:03.529830   10731 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I1101 08:30:03.529918   10731 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-491859
	I1101 08:30:03.531367   10731 addons.go:436] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1101 08:30:03.531385   10731 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1101 08:30:03.531438   10731 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-491859
	I1101 08:30:03.531850   10731 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1101 08:30:03.532990   10731 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1101 08:30:03.534068   10731 addons.go:436] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1101 08:30:03.534087   10731 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1101 08:30:03.534148   10731 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-491859
	I1101 08:30:03.546403   10731 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21835-5913/.minikube/machines/addons-491859/id_rsa Username:docker}
	I1101 08:30:03.547268   10731 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1101 08:30:03.547277   10731 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I1101 08:30:03.548467   10731 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1101 08:30:03.548515   10731 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1101 08:30:03.548584   10731 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-491859
	I1101 08:30:03.550060   10731 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21835-5913/.minikube/machines/addons-491859/id_rsa Username:docker}
	I1101 08:30:03.551275   10731 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1101 08:30:03.551294   10731 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1101 08:30:03.551371   10731 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-491859
	I1101 08:30:03.552134   10731 addons.go:436] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1101 08:30:03.552150   10731 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1101 08:30:03.552208   10731 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-491859
	I1101 08:30:03.558147   10731 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21835-5913/.minikube/machines/addons-491859/id_rsa Username:docker}
	I1101 08:30:03.562741   10731 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1101 08:30:03.570563   10731 addons.go:436] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1101 08:30:03.570585   10731 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1101 08:30:03.570651   10731 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-491859
	I1101 08:30:03.571001   10731 out.go:179]   - Using image docker.io/busybox:stable
	I1101 08:30:03.572980   10731 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1101 08:30:03.574420   10731 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1101 08:30:03.574438   10731 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1101 08:30:03.574504   10731 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-491859
	I1101 08:30:03.594336   10731 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21835-5913/.minikube/machines/addons-491859/id_rsa Username:docker}
	I1101 08:30:03.597748   10731 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21835-5913/.minikube/machines/addons-491859/id_rsa Username:docker}
	I1101 08:30:03.599494   10731 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1101 08:30:03.599137   10731 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21835-5913/.minikube/machines/addons-491859/id_rsa Username:docker}
	I1101 08:30:03.603413   10731 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21835-5913/.minikube/machines/addons-491859/id_rsa Username:docker}
	I1101 08:30:03.606503   10731 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21835-5913/.minikube/machines/addons-491859/id_rsa Username:docker}
	I1101 08:30:03.606963   10731 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21835-5913/.minikube/machines/addons-491859/id_rsa Username:docker}
	I1101 08:30:03.617302   10731 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21835-5913/.minikube/machines/addons-491859/id_rsa Username:docker}
	I1101 08:30:03.620094   10731 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21835-5913/.minikube/machines/addons-491859/id_rsa Username:docker}
	I1101 08:30:03.626000   10731 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1101 08:30:03.626653   10731 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21835-5913/.minikube/machines/addons-491859/id_rsa Username:docker}
	I1101 08:30:03.633939   10731 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21835-5913/.minikube/machines/addons-491859/id_rsa Username:docker}
	I1101 08:30:03.640221   10731 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21835-5913/.minikube/machines/addons-491859/id_rsa Username:docker}
	I1101 08:30:03.649782   10731 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21835-5913/.minikube/machines/addons-491859/id_rsa Username:docker}
	W1101 08:30:03.649815   10731 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1101 08:30:03.649874   10731 retry.go:31] will retry after 372.854236ms: ssh: handshake failed: EOF
	I1101 08:30:03.732765   10731 addons.go:436] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1101 08:30:03.732790   10731 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1101 08:30:03.744011   10731 addons.go:436] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1101 08:30:03.744051   10731 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I1101 08:30:03.750706   10731 addons.go:436] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1101 08:30:03.750729   10731 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1101 08:30:03.750911   10731 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I1101 08:30:03.772429   10731 addons.go:436] installing /etc/kubernetes/addons/registry-svc.yaml
	I1101 08:30:03.772460   10731 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1101 08:30:03.776339   10731 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1101 08:30:03.784021   10731 addons.go:436] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1101 08:30:03.784051   10731 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1101 08:30:03.788051   10731 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1101 08:30:03.789803   10731 addons.go:436] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1101 08:30:03.789838   10731 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1101 08:30:03.792023   10731 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1101 08:30:03.793099   10731 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1101 08:30:03.793122   10731 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1101 08:30:03.794618   10731 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1101 08:30:03.811703   10731 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1101 08:30:03.811842   10731 addons.go:436] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1101 08:30:03.811855   10731 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1101 08:30:03.812454   10731 addons.go:436] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1101 08:30:03.812508   10731 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1101 08:30:03.818716   10731 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1101 08:30:03.825949   10731 addons.go:436] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1101 08:30:03.825975   10731 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1101 08:30:03.829212   10731 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1101 08:30:03.829242   10731 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1101 08:30:03.834190   10731 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1101 08:30:03.837859   10731 addons.go:436] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1101 08:30:03.837904   10731 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1101 08:30:03.842320   10731 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1101 08:30:03.860421   10731 addons.go:436] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1101 08:30:03.860520   10731 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1101 08:30:03.863770   10731 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1101 08:30:03.863792   10731 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1101 08:30:03.880009   10731 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1101 08:30:03.885813   10731 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1101 08:30:03.892961   10731 addons.go:436] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1101 08:30:03.893059   10731 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1101 08:30:03.896066   10731 addons.go:436] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1101 08:30:03.896094   10731 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1101 08:30:03.917359   10731 addons.go:436] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1101 08:30:03.917393   10731 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1101 08:30:03.936348   10731 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1101 08:30:03.954489   10731 addons.go:436] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1101 08:30:03.954536   10731 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1101 08:30:03.975980   10731 addons.go:436] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1101 08:30:03.976005   10731 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1101 08:30:03.978702   10731 start.go:977] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I1101 08:30:03.979819   10731 node_ready.go:35] waiting up to 6m0s for node "addons-491859" to be "Ready" ...
	I1101 08:30:04.034056   10731 addons.go:436] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1101 08:30:04.034128   10731 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1101 08:30:04.044135   10731 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1101 08:30:04.114761   10731 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1101 08:30:04.114791   10731 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1101 08:30:04.213240   10731 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1101 08:30:04.213272   10731 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1101 08:30:04.262549   10731 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1101 08:30:04.262582   10731 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1101 08:30:04.306479   10731 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1101 08:30:04.306527   10731 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1101 08:30:04.345132   10731 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1101 08:30:04.369379   10731 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1101 08:30:04.369413   10731 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1101 08:30:04.425385   10731 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1101 08:30:04.484618   10731 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-491859" context rescaled to 1 replicas
	I1101 08:30:04.794803   10731 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.018416231s)
	W1101 08:30:04.794938   10731 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 08:30:04.795028   10731 retry.go:31] will retry after 284.875547ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 08:30:04.796727   10731 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (1.008636924s)
	I1101 08:30:04.797163   10731 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (1.005108247s)
	I1101 08:30:04.797213   10731 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (1.002575723s)
	I1101 08:30:05.023052   10731 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (1.188818769s)
	I1101 08:30:05.023105   10731 addons.go:480] Verifying addon ingress=true in "addons-491859"
	I1101 08:30:05.023233   10731 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (1.143190692s)
	I1101 08:30:05.023171   10731 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.180804805s)
	I1101 08:30:05.023328   10731 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (1.13722538s)
	I1101 08:30:05.023348   10731 addons.go:480] Verifying addon registry=true in "addons-491859"
	I1101 08:30:05.023463   10731 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.087084333s)
	I1101 08:30:05.024408   10731 addons.go:480] Verifying addon metrics-server=true in "addons-491859"
	I1101 08:30:05.024795   10731 out.go:179] * Verifying ingress addon...
	I1101 08:30:05.024794   10731 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-491859 service yakd-dashboard -n yakd-dashboard
	
	I1101 08:30:05.025511   10731 out.go:179] * Verifying registry addon...
	I1101 08:30:05.027123   10731 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1101 08:30:05.028492   10731 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1101 08:30:05.030974   10731 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I1101 08:30:05.030997   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:30:05.031105   10731 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1101 08:30:05.031124   10731 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:30:05.080979   10731 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1101 08:30:05.476586   10731 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.432394376s)
	W1101 08:30:05.476629   10731 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1101 08:30:05.476652   10731 retry.go:31] will retry after 362.914869ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1101 08:30:05.476691   10731 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (1.131517968s)
	I1101 08:30:05.476891   10731 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (1.051450818s)
	I1101 08:30:05.476916   10731 addons.go:480] Verifying addon csi-hostpath-driver=true in "addons-491859"
	I1101 08:30:05.478896   10731 out.go:179] * Verifying csi-hostpath-driver addon...
	I1101 08:30:05.480997   10731 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1101 08:30:05.483732   10731 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1101 08:30:05.483754   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:30:05.584950   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:30:05.585097   10731 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1101 08:30:05.735660   10731 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 08:30:05.735689   10731 retry.go:31] will retry after 386.234411ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 08:30:05.840005   10731 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	W1101 08:30:05.983211   10731 node_ready.go:57] node "addons-491859" has "Ready":"False" status (will retry)
	I1101 08:30:05.984141   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:30:06.030027   10731 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:30:06.031630   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:30:06.122367   10731 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1101 08:30:06.484096   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:30:06.530216   10731 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:30:06.531677   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:30:06.984119   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:30:07.031030   10731 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:30:07.031422   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:30:07.484172   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:30:07.530028   10731 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:30:07.531675   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:30:07.984217   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:30:08.029911   10731 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:30:08.031595   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:30:08.313033   10731 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.472980617s)
	I1101 08:30:08.313133   10731 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (2.190732222s)
	W1101 08:30:08.313160   10731 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 08:30:08.313180   10731 retry.go:31] will retry after 498.995051ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	W1101 08:30:08.482791   10731 node_ready.go:57] node "addons-491859" has "Ready":"False" status (will retry)
	I1101 08:30:08.483935   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:30:08.530611   10731 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:30:08.531226   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:30:08.813003   10731 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1101 08:30:08.983701   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:30:09.030593   10731 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:30:09.030946   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1101 08:30:09.353024   10731 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 08:30:09.353060   10731 retry.go:31] will retry after 1.048520412s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 08:30:09.484232   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:30:09.531211   10731 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:30:09.531277   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:30:09.983610   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:30:10.030529   10731 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:30:10.030836   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:30:10.402391   10731 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	W1101 08:30:10.483335   10731 node_ready.go:57] node "addons-491859" has "Ready":"False" status (will retry)
	I1101 08:30:10.484402   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:30:10.530217   10731 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:30:10.530760   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1101 08:30:10.942923   10731 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 08:30:10.942956   10731 retry.go:31] will retry after 682.933229ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 08:30:10.983672   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:30:11.030486   10731 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:30:11.031040   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:30:11.052949   10731 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1101 08:30:11.053016   10731 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-491859
	I1101 08:30:11.071657   10731 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21835-5913/.minikube/machines/addons-491859/id_rsa Username:docker}
	I1101 08:30:11.182470   10731 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1101 08:30:11.196199   10731 addons.go:239] Setting addon gcp-auth=true in "addons-491859"
	I1101 08:30:11.196254   10731 host.go:66] Checking if "addons-491859" exists ...
	I1101 08:30:11.196631   10731 cli_runner.go:164] Run: docker container inspect addons-491859 --format={{.State.Status}}
	I1101 08:30:11.214892   10731 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1101 08:30:11.214949   10731 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-491859
	I1101 08:30:11.233617   10731 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21835-5913/.minikube/machines/addons-491859/id_rsa Username:docker}
	I1101 08:30:11.333840   10731 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1101 08:30:11.335074   10731 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1101 08:30:11.335999   10731 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1101 08:30:11.336024   10731 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1101 08:30:11.349642   10731 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1101 08:30:11.349664   10731 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1101 08:30:11.362726   10731 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1101 08:30:11.362745   10731 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1101 08:30:11.376508   10731 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1101 08:30:11.483416   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:30:11.530165   10731 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:30:11.531617   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:30:11.626669   10731 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1101 08:30:11.694019   10731 addons.go:480] Verifying addon gcp-auth=true in "addons-491859"
	I1101 08:30:11.695542   10731 out.go:179] * Verifying gcp-auth addon...
	I1101 08:30:11.697650   10731 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1101 08:30:11.700461   10731 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1101 08:30:11.700483   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:30:11.983776   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:30:12.030580   10731 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:30:12.031052   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1101 08:30:12.191761   10731 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 08:30:12.191791   10731 retry.go:31] will retry after 2.830148725s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 08:30:12.200093   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:30:12.484090   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:30:12.530958   10731 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:30:12.531292   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:30:12.701182   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1101 08:30:12.983164   10731 node_ready.go:57] node "addons-491859" has "Ready":"False" status (will retry)
	I1101 08:30:12.984214   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:30:13.029697   10731 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:30:13.031328   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:30:13.200898   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:30:13.483891   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:30:13.530735   10731 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:30:13.530844   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:30:13.701115   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:30:13.983835   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:30:14.030909   10731 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:30:14.031033   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:30:14.200833   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:30:14.483727   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:30:14.530352   10731 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:30:14.530900   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:30:14.700386   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1101 08:30:14.983326   10731 node_ready.go:57] node "addons-491859" has "Ready":"False" status (will retry)
	I1101 08:30:14.983972   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:30:15.023119   10731 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1101 08:30:15.030268   10731 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:30:15.031042   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:30:15.201200   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:30:15.484013   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:30:15.530091   10731 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:30:15.530781   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1101 08:30:15.560653   10731 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 08:30:15.560697   10731 retry.go:31] will retry after 3.900593045s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 08:30:15.700450   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:30:15.983281   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:30:16.030074   10731 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:30:16.030561   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:30:16.200354   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:30:16.484199   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:30:16.529992   10731 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:30:16.531680   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:30:16.701129   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:30:16.983685   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:30:17.030570   10731 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:30:17.030741   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:30:17.200322   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1101 08:30:17.483257   10731 node_ready.go:57] node "addons-491859" has "Ready":"False" status (will retry)
	I1101 08:30:17.483974   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:30:17.530732   10731 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:30:17.531286   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:30:17.701273   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:30:17.984294   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:30:18.029790   10731 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:30:18.031556   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:30:18.201466   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:30:18.483645   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:30:18.530232   10731 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:30:18.530991   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:30:18.700816   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:30:18.983265   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:30:19.029817   10731 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:30:19.031700   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:30:19.200153   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:30:19.461470   10731 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1101 08:30:19.483636   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:30:19.530669   10731 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:30:19.530979   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:30:19.700493   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1101 08:30:19.983015   10731 node_ready.go:57] node "addons-491859" has "Ready":"False" status (will retry)
	I1101 08:30:19.983726   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1101 08:30:20.013982   10731 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 08:30:20.014012   10731 retry.go:31] will retry after 2.317231137s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 08:30:20.030969   10731 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:30:20.031601   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:30:20.200193   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:30:20.484381   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:30:20.530206   10731 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:30:20.531821   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:30:20.700457   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:30:20.983695   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:30:21.030375   10731 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:30:21.030963   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:30:21.200636   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:30:21.483893   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:30:21.530527   10731 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:30:21.531096   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:30:21.700823   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:30:21.983609   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:30:22.030592   10731 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:30:22.030897   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:30:22.200744   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:30:22.331993   10731 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	W1101 08:30:22.483098   10731 node_ready.go:57] node "addons-491859" has "Ready":"False" status (will retry)
	I1101 08:30:22.484570   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:30:22.530469   10731 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:30:22.531360   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:30:22.700244   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1101 08:30:22.857684   10731 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 08:30:22.857717   10731 retry.go:31] will retry after 8.632870588s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
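
Note on the failure above: kubectl's client-side validation is rejecting /etc/kubernetes/addons/ig-crd.yaml with "[apiVersion not set, kind not set]", which suggests at least one YAML document in that file is missing its top-level apiVersion and kind fields, so the apply keeps failing and retrying even though the other ig manifests are applied cleanly. A quick hypothetical check on the node (path taken from the log; this command was not part of the original run) would be:

    # list the top-level apiVersion/kind declarations in the CRD manifest;
    # an empty or partial result points at the document kubectl is rejecting
    grep -nE '^(apiVersion|kind):' /etc/kubernetes/addons/ig-crd.yaml \
      || echo "no top-level apiVersion/kind found in ig-crd.yaml"
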
	I1101 08:30:22.983497   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:30:23.030363   10731 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:30:23.030815   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:30:23.200584   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:30:23.483626   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:30:23.530779   10731 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:30:23.530816   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:30:23.700646   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:30:23.983181   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:30:24.029639   10731 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:30:24.031328   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:30:24.200831   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:30:24.483634   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:30:24.530201   10731 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:30:24.530884   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:30:24.700364   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1101 08:30:24.983519   10731 node_ready.go:57] node "addons-491859" has "Ready":"False" status (will retry)
	I1101 08:30:24.984187   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:30:25.029687   10731 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:30:25.031784   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:30:25.200239   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:30:25.483544   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:30:25.530104   10731 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:30:25.530880   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:30:25.700319   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:30:25.984276   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:30:26.029808   10731 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:30:26.031669   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:30:26.200152   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:30:26.484076   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:30:26.530979   10731 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:30:26.531360   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:30:26.700931   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:30:26.983554   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:30:27.030415   10731 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:30:27.030599   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:30:27.201042   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1101 08:30:27.483165   10731 node_ready.go:57] node "addons-491859" has "Ready":"False" status (will retry)
	I1101 08:30:27.483776   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:30:27.530494   10731 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:30:27.531077   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:30:27.700441   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:30:27.983287   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:30:28.030332   10731 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:30:28.031503   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:30:28.201174   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:30:28.484264   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:30:28.529907   10731 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:30:28.531601   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:30:28.701075   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:30:28.983567   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:30:29.030286   10731 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:30:29.030635   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:30:29.200335   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:30:29.484022   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:30:29.530974   10731 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:30:29.531203   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:30:29.700718   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1101 08:30:29.982342   10731 node_ready.go:57] node "addons-491859" has "Ready":"False" status (will retry)
	I1101 08:30:29.983261   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:30:30.030005   10731 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:30:30.031455   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:30:30.201147   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:30:30.483736   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:30:30.530542   10731 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:30:30.531006   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:30:30.700342   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:30:30.983973   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:30:31.030709   10731 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:30:31.031228   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:30:31.201218   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:30:31.483666   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:30:31.490784   10731 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1101 08:30:31.530772   10731 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:30:31.531515   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:30:31.701314   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:30:31.983234   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1101 08:30:32.022305   10731 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 08:30:32.022336   10731 retry.go:31] will retry after 9.63990457s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 08:30:32.030013   10731 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:30:32.031473   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:30:32.201003   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1101 08:30:32.482745   10731 node_ready.go:57] node "addons-491859" has "Ready":"False" status (will retry)
	I1101 08:30:32.483469   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:30:32.530122   10731 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:30:32.530795   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:30:32.700386   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:30:32.983857   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:30:33.030445   10731 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:30:33.030962   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:30:33.200556   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:30:33.483768   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:30:33.530776   10731 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:30:33.531010   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:30:33.700573   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:30:33.983287   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:30:34.030136   10731 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:30:34.030648   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:30:34.200283   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1101 08:30:34.482990   10731 node_ready.go:57] node "addons-491859" has "Ready":"False" status (will retry)
	I1101 08:30:34.483946   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:30:34.530899   10731 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:30:34.531410   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:30:34.701010   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:30:34.983569   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:30:35.030238   10731 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:30:35.030676   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:30:35.200150   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:30:35.483750   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:30:35.530244   10731 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:30:35.530897   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:30:35.700731   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:30:35.983110   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:30:36.030968   10731 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:30:36.031158   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:30:36.201114   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1101 08:30:36.483175   10731 node_ready.go:57] node "addons-491859" has "Ready":"False" status (will retry)
	I1101 08:30:36.484298   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:30:36.529892   10731 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:30:36.531505   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:30:36.700995   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:30:36.983486   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:30:37.029986   10731 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:30:37.030806   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:30:37.200364   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:30:37.483746   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:30:37.530514   10731 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:30:37.531099   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:30:37.700743   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:30:37.983982   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:30:38.030619   10731 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:30:38.031026   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:30:38.201043   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1101 08:30:38.483277   10731 node_ready.go:57] node "addons-491859" has "Ready":"False" status (will retry)
	I1101 08:30:38.485249   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:30:38.529744   10731 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:30:38.531168   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:30:38.701485   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:30:38.983328   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:30:39.029910   10731 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:30:39.030580   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:30:39.200178   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:30:39.483623   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:30:39.530268   10731 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:30:39.530856   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:30:39.700788   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:30:39.983215   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:30:40.029775   10731 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:30:40.031309   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:30:40.201011   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:30:40.483775   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:30:40.530455   10731 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:30:40.530994   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:30:40.700448   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1101 08:30:40.983153   10731 node_ready.go:57] node "addons-491859" has "Ready":"False" status (will retry)
	I1101 08:30:40.983243   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:30:41.029843   10731 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:30:41.031325   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:30:41.200632   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:30:41.483779   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:30:41.530252   10731 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:30:41.531050   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:30:41.663367   10731 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1101 08:30:41.700784   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:30:41.983610   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:30:42.030326   10731 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:30:42.030923   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:30:42.200567   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1101 08:30:42.217133   10731 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 08:30:42.217160   10731 retry.go:31] will retry after 18.203457347s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 08:30:42.483748   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:30:42.530591   10731 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:30:42.530958   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:30:42.700548   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:30:42.983348   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1101 08:30:42.983404   10731 node_ready.go:57] node "addons-491859" has "Ready":"False" status (will retry)
	I1101 08:30:43.030076   10731 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:30:43.030795   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:30:43.200333   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:30:43.483500   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:30:43.530119   10731 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:30:43.530675   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:30:43.700193   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:30:43.983842   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:30:44.030479   10731 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:30:44.030950   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:30:44.200632   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:30:44.482486   10731 node_ready.go:49] node "addons-491859" is "Ready"
	I1101 08:30:44.482525   10731 node_ready.go:38] duration metric: took 40.502673113s for node "addons-491859" to be "Ready" ...
	I1101 08:30:44.482554   10731 api_server.go:52] waiting for apiserver process to appear ...
	I1101 08:30:44.482615   10731 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 08:30:44.483452   10731 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1101 08:30:44.483474   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:30:44.496904   10731 api_server.go:72] duration metric: took 41.104839041s to wait for apiserver process to appear ...
	I1101 08:30:44.496932   10731 api_server.go:88] waiting for apiserver healthz status ...
	I1101 08:30:44.496952   10731 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1101 08:30:44.501946   10731 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1101 08:30:44.502965   10731 api_server.go:141] control plane version: v1.34.1
	I1101 08:30:44.502991   10731 api_server.go:131] duration metric: took 6.052489ms to wait for apiserver health ...
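
The healthz wait above amounts to probing the apiserver's /healthz endpoint until it returns HTTP 200 with the plain-text body "ok". A rough manual equivalent (endpoint taken from the log; depending on the cluster's anonymous-auth setting this may additionally require client credentials) would be:

    # probe the control-plane health endpoint seen in the log;
    # -k skips TLS verification, -s silences progress output
    curl -ks https://192.168.49.2:8443/healthz
    # expected output: ok
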
	I1101 08:30:44.503000   10731 system_pods.go:43] waiting for kube-system pods to appear ...
	I1101 08:30:44.508541   10731 system_pods.go:59] 20 kube-system pods found
	I1101 08:30:44.508586   10731 system_pods.go:61] "amd-gpu-device-plugin-6twrx" [6d120f25-a6a5-48f2-8849-25607b2e8338] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1101 08:30:44.508598   10731 system_pods.go:61] "coredns-66bc5c9577-wp7lb" [eae56377-036f-4eef-89a7-5d685f77fdeb] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 08:30:44.508608   10731 system_pods.go:61] "csi-hostpath-attacher-0" [b7fd1d03-fc22-4436-8594-4949ae507ffc] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1101 08:30:44.508631   10731 system_pods.go:61] "csi-hostpath-resizer-0" [944b7053-40ae-4094-b90e-5a1828ef9297] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1101 08:30:44.508640   10731 system_pods.go:61] "csi-hostpathplugin-b7wqd" [00647a0a-0c62-4ce2-a788-8db986f1d092] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1101 08:30:44.508646   10731 system_pods.go:61] "etcd-addons-491859" [debd6041-229d-4fe9-b7d3-5d939545f1ee] Running
	I1101 08:30:44.508651   10731 system_pods.go:61] "kindnet-7cj4p" [800b9b84-244b-4262-8df7-589eed5b9599] Running
	I1101 08:30:44.508658   10731 system_pods.go:61] "kube-apiserver-addons-491859" [d9b4572e-30e3-4ec1-ac79-bef8aaf6a60a] Running
	I1101 08:30:44.508663   10731 system_pods.go:61] "kube-controller-manager-addons-491859" [40de3202-f335-43e6-9af3-e7c4a5b50b43] Running
	I1101 08:30:44.508674   10731 system_pods.go:61] "kube-ingress-dns-minikube" [0e191110-51bb-4a21-a2cb-363be938390f] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1101 08:30:44.508684   10731 system_pods.go:61] "kube-proxy-h22tg" [f2f6b41b-c798-4afd-a685-24ba393d78a7] Running
	I1101 08:30:44.508690   10731 system_pods.go:61] "kube-scheduler-addons-491859" [4e097ceb-3a17-432d-8359-7ad7db3c99da] Running
	I1101 08:30:44.508699   10731 system_pods.go:61] "metrics-server-85b7d694d7-8j2pv" [8863ea1b-774d-469e-8487-d29ec16b131c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1101 08:30:44.508709   10731 system_pods.go:61] "nvidia-device-plugin-daemonset-hbv5p" [838833dc-5806-4421-822f-e50f71ba642b] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1101 08:30:44.508722   10731 system_pods.go:61] "registry-6b586f9694-nlmgw" [81e0129d-d199-423b-a493-623cb2695a4f] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1101 08:30:44.508734   10731 system_pods.go:61] "registry-creds-764b6fb674-rj5zk" [5f281acc-558f-462c-bf98-c52c7b8b34a1] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1101 08:30:44.508747   10731 system_pods.go:61] "registry-proxy-jncr6" [6640fd69-5d62-4d2e-acb5-66ff58f82684] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1101 08:30:44.508761   10731 system_pods.go:61] "snapshot-controller-7d9fbc56b8-7bhnn" [2243f94e-8cc4-4e41-9e6b-6e83768aa796] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1101 08:30:44.508775   10731 system_pods.go:61] "snapshot-controller-7d9fbc56b8-c9dzh" [502d593c-55d4-440e-b4f3-2a5f5c53bca7] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1101 08:30:44.508787   10731 system_pods.go:61] "storage-provisioner" [4b227df0-0df7-4c55-81bb-20a8928f38ea] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1101 08:30:44.508799   10731 system_pods.go:74] duration metric: took 5.79208ms to wait for pod list to return data ...
	I1101 08:30:44.508813   10731 default_sa.go:34] waiting for default service account to be created ...
	I1101 08:30:44.517793   10731 default_sa.go:45] found service account: "default"
	I1101 08:30:44.517825   10731 default_sa.go:55] duration metric: took 9.003622ms for default service account to be created ...
	I1101 08:30:44.517839   10731 system_pods.go:116] waiting for k8s-apps to be running ...
	I1101 08:30:44.532083   10731 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:30:44.532987   10731 system_pods.go:86] 20 kube-system pods found
	I1101 08:30:44.533015   10731 system_pods.go:89] "amd-gpu-device-plugin-6twrx" [6d120f25-a6a5-48f2-8849-25607b2e8338] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1101 08:30:44.533026   10731 system_pods.go:89] "coredns-66bc5c9577-wp7lb" [eae56377-036f-4eef-89a7-5d685f77fdeb] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 08:30:44.533036   10731 system_pods.go:89] "csi-hostpath-attacher-0" [b7fd1d03-fc22-4436-8594-4949ae507ffc] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1101 08:30:44.533045   10731 system_pods.go:89] "csi-hostpath-resizer-0" [944b7053-40ae-4094-b90e-5a1828ef9297] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1101 08:30:44.533054   10731 system_pods.go:89] "csi-hostpathplugin-b7wqd" [00647a0a-0c62-4ce2-a788-8db986f1d092] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1101 08:30:44.533060   10731 system_pods.go:89] "etcd-addons-491859" [debd6041-229d-4fe9-b7d3-5d939545f1ee] Running
	I1101 08:30:44.533069   10731 system_pods.go:89] "kindnet-7cj4p" [800b9b84-244b-4262-8df7-589eed5b9599] Running
	I1101 08:30:44.533115   10731 system_pods.go:89] "kube-apiserver-addons-491859" [d9b4572e-30e3-4ec1-ac79-bef8aaf6a60a] Running
	I1101 08:30:44.533127   10731 system_pods.go:89] "kube-controller-manager-addons-491859" [40de3202-f335-43e6-9af3-e7c4a5b50b43] Running
	I1101 08:30:44.533137   10731 system_pods.go:89] "kube-ingress-dns-minikube" [0e191110-51bb-4a21-a2cb-363be938390f] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1101 08:30:44.533143   10731 system_pods.go:89] "kube-proxy-h22tg" [f2f6b41b-c798-4afd-a685-24ba393d78a7] Running
	I1101 08:30:44.533148   10731 system_pods.go:89] "kube-scheduler-addons-491859" [4e097ceb-3a17-432d-8359-7ad7db3c99da] Running
	I1101 08:30:44.533156   10731 system_pods.go:89] "metrics-server-85b7d694d7-8j2pv" [8863ea1b-774d-469e-8487-d29ec16b131c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1101 08:30:44.533165   10731 system_pods.go:89] "nvidia-device-plugin-daemonset-hbv5p" [838833dc-5806-4421-822f-e50f71ba642b] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1101 08:30:44.533177   10731 system_pods.go:89] "registry-6b586f9694-nlmgw" [81e0129d-d199-423b-a493-623cb2695a4f] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1101 08:30:44.533186   10731 system_pods.go:89] "registry-creds-764b6fb674-rj5zk" [5f281acc-558f-462c-bf98-c52c7b8b34a1] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1101 08:30:44.533111   10731 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1101 08:30:44.533212   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:30:44.533197   10731 system_pods.go:89] "registry-proxy-jncr6" [6640fd69-5d62-4d2e-acb5-66ff58f82684] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1101 08:30:44.533249   10731 system_pods.go:89] "snapshot-controller-7d9fbc56b8-7bhnn" [2243f94e-8cc4-4e41-9e6b-6e83768aa796] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1101 08:30:44.533262   10731 system_pods.go:89] "snapshot-controller-7d9fbc56b8-c9dzh" [502d593c-55d4-440e-b4f3-2a5f5c53bca7] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1101 08:30:44.533282   10731 system_pods.go:89] "storage-provisioner" [4b227df0-0df7-4c55-81bb-20a8928f38ea] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1101 08:30:44.533300   10731 retry.go:31] will retry after 223.645933ms: missing components: kube-dns
	I1101 08:30:44.701221   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:30:44.803418   10731 system_pods.go:86] 20 kube-system pods found
	I1101 08:30:44.803454   10731 system_pods.go:89] "amd-gpu-device-plugin-6twrx" [6d120f25-a6a5-48f2-8849-25607b2e8338] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1101 08:30:44.803462   10731 system_pods.go:89] "coredns-66bc5c9577-wp7lb" [eae56377-036f-4eef-89a7-5d685f77fdeb] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 08:30:44.803468   10731 system_pods.go:89] "csi-hostpath-attacher-0" [b7fd1d03-fc22-4436-8594-4949ae507ffc] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1101 08:30:44.803473   10731 system_pods.go:89] "csi-hostpath-resizer-0" [944b7053-40ae-4094-b90e-5a1828ef9297] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1101 08:30:44.803482   10731 system_pods.go:89] "csi-hostpathplugin-b7wqd" [00647a0a-0c62-4ce2-a788-8db986f1d092] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1101 08:30:44.803485   10731 system_pods.go:89] "etcd-addons-491859" [debd6041-229d-4fe9-b7d3-5d939545f1ee] Running
	I1101 08:30:44.803490   10731 system_pods.go:89] "kindnet-7cj4p" [800b9b84-244b-4262-8df7-589eed5b9599] Running
	I1101 08:30:44.803494   10731 system_pods.go:89] "kube-apiserver-addons-491859" [d9b4572e-30e3-4ec1-ac79-bef8aaf6a60a] Running
	I1101 08:30:44.803497   10731 system_pods.go:89] "kube-controller-manager-addons-491859" [40de3202-f335-43e6-9af3-e7c4a5b50b43] Running
	I1101 08:30:44.803503   10731 system_pods.go:89] "kube-ingress-dns-minikube" [0e191110-51bb-4a21-a2cb-363be938390f] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1101 08:30:44.803516   10731 system_pods.go:89] "kube-proxy-h22tg" [f2f6b41b-c798-4afd-a685-24ba393d78a7] Running
	I1101 08:30:44.803520   10731 system_pods.go:89] "kube-scheduler-addons-491859" [4e097ceb-3a17-432d-8359-7ad7db3c99da] Running
	I1101 08:30:44.803524   10731 system_pods.go:89] "metrics-server-85b7d694d7-8j2pv" [8863ea1b-774d-469e-8487-d29ec16b131c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1101 08:30:44.803537   10731 system_pods.go:89] "nvidia-device-plugin-daemonset-hbv5p" [838833dc-5806-4421-822f-e50f71ba642b] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1101 08:30:44.803544   10731 system_pods.go:89] "registry-6b586f9694-nlmgw" [81e0129d-d199-423b-a493-623cb2695a4f] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1101 08:30:44.803551   10731 system_pods.go:89] "registry-creds-764b6fb674-rj5zk" [5f281acc-558f-462c-bf98-c52c7b8b34a1] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1101 08:30:44.803557   10731 system_pods.go:89] "registry-proxy-jncr6" [6640fd69-5d62-4d2e-acb5-66ff58f82684] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1101 08:30:44.803562   10731 system_pods.go:89] "snapshot-controller-7d9fbc56b8-7bhnn" [2243f94e-8cc4-4e41-9e6b-6e83768aa796] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1101 08:30:44.803570   10731 system_pods.go:89] "snapshot-controller-7d9fbc56b8-c9dzh" [502d593c-55d4-440e-b4f3-2a5f5c53bca7] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1101 08:30:44.803574   10731 system_pods.go:89] "storage-provisioner" [4b227df0-0df7-4c55-81bb-20a8928f38ea] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1101 08:30:44.803587   10731 retry.go:31] will retry after 322.669522ms: missing components: kube-dns
	I1101 08:30:44.986702   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:30:45.086307   10731 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:30:45.086430   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:30:45.131081   10731 system_pods.go:86] 20 kube-system pods found
	I1101 08:30:45.131122   10731 system_pods.go:89] "amd-gpu-device-plugin-6twrx" [6d120f25-a6a5-48f2-8849-25607b2e8338] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1101 08:30:45.131134   10731 system_pods.go:89] "coredns-66bc5c9577-wp7lb" [eae56377-036f-4eef-89a7-5d685f77fdeb] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 08:30:45.131144   10731 system_pods.go:89] "csi-hostpath-attacher-0" [b7fd1d03-fc22-4436-8594-4949ae507ffc] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1101 08:30:45.131157   10731 system_pods.go:89] "csi-hostpath-resizer-0" [944b7053-40ae-4094-b90e-5a1828ef9297] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1101 08:30:45.131168   10731 system_pods.go:89] "csi-hostpathplugin-b7wqd" [00647a0a-0c62-4ce2-a788-8db986f1d092] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1101 08:30:45.131178   10731 system_pods.go:89] "etcd-addons-491859" [debd6041-229d-4fe9-b7d3-5d939545f1ee] Running
	I1101 08:30:45.131189   10731 system_pods.go:89] "kindnet-7cj4p" [800b9b84-244b-4262-8df7-589eed5b9599] Running
	I1101 08:30:45.131198   10731 system_pods.go:89] "kube-apiserver-addons-491859" [d9b4572e-30e3-4ec1-ac79-bef8aaf6a60a] Running
	I1101 08:30:45.131204   10731 system_pods.go:89] "kube-controller-manager-addons-491859" [40de3202-f335-43e6-9af3-e7c4a5b50b43] Running
	I1101 08:30:45.131217   10731 system_pods.go:89] "kube-ingress-dns-minikube" [0e191110-51bb-4a21-a2cb-363be938390f] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1101 08:30:45.131225   10731 system_pods.go:89] "kube-proxy-h22tg" [f2f6b41b-c798-4afd-a685-24ba393d78a7] Running
	I1101 08:30:45.131233   10731 system_pods.go:89] "kube-scheduler-addons-491859" [4e097ceb-3a17-432d-8359-7ad7db3c99da] Running
	I1101 08:30:45.131244   10731 system_pods.go:89] "metrics-server-85b7d694d7-8j2pv" [8863ea1b-774d-469e-8487-d29ec16b131c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1101 08:30:45.131252   10731 system_pods.go:89] "nvidia-device-plugin-daemonset-hbv5p" [838833dc-5806-4421-822f-e50f71ba642b] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1101 08:30:45.131263   10731 system_pods.go:89] "registry-6b586f9694-nlmgw" [81e0129d-d199-423b-a493-623cb2695a4f] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1101 08:30:45.131272   10731 system_pods.go:89] "registry-creds-764b6fb674-rj5zk" [5f281acc-558f-462c-bf98-c52c7b8b34a1] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1101 08:30:45.131299   10731 system_pods.go:89] "registry-proxy-jncr6" [6640fd69-5d62-4d2e-acb5-66ff58f82684] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1101 08:30:45.131310   10731 system_pods.go:89] "snapshot-controller-7d9fbc56b8-7bhnn" [2243f94e-8cc4-4e41-9e6b-6e83768aa796] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1101 08:30:45.131322   10731 system_pods.go:89] "snapshot-controller-7d9fbc56b8-c9dzh" [502d593c-55d4-440e-b4f3-2a5f5c53bca7] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1101 08:30:45.131334   10731 system_pods.go:89] "storage-provisioner" [4b227df0-0df7-4c55-81bb-20a8928f38ea] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1101 08:30:45.131354   10731 retry.go:31] will retry after 465.248265ms: missing components: kube-dns
	I1101 08:30:45.200911   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:30:45.485498   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:30:45.530399   10731 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:30:45.531769   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:30:45.601286   10731 system_pods.go:86] 20 kube-system pods found
	I1101 08:30:45.601320   10731 system_pods.go:89] "amd-gpu-device-plugin-6twrx" [6d120f25-a6a5-48f2-8849-25607b2e8338] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1101 08:30:45.601327   10731 system_pods.go:89] "coredns-66bc5c9577-wp7lb" [eae56377-036f-4eef-89a7-5d685f77fdeb] Running
	I1101 08:30:45.601339   10731 system_pods.go:89] "csi-hostpath-attacher-0" [b7fd1d03-fc22-4436-8594-4949ae507ffc] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1101 08:30:45.601346   10731 system_pods.go:89] "csi-hostpath-resizer-0" [944b7053-40ae-4094-b90e-5a1828ef9297] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1101 08:30:45.601355   10731 system_pods.go:89] "csi-hostpathplugin-b7wqd" [00647a0a-0c62-4ce2-a788-8db986f1d092] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1101 08:30:45.601363   10731 system_pods.go:89] "etcd-addons-491859" [debd6041-229d-4fe9-b7d3-5d939545f1ee] Running
	I1101 08:30:45.601371   10731 system_pods.go:89] "kindnet-7cj4p" [800b9b84-244b-4262-8df7-589eed5b9599] Running
	I1101 08:30:45.601379   10731 system_pods.go:89] "kube-apiserver-addons-491859" [d9b4572e-30e3-4ec1-ac79-bef8aaf6a60a] Running
	I1101 08:30:45.601385   10731 system_pods.go:89] "kube-controller-manager-addons-491859" [40de3202-f335-43e6-9af3-e7c4a5b50b43] Running
	I1101 08:30:45.601398   10731 system_pods.go:89] "kube-ingress-dns-minikube" [0e191110-51bb-4a21-a2cb-363be938390f] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1101 08:30:45.601403   10731 system_pods.go:89] "kube-proxy-h22tg" [f2f6b41b-c798-4afd-a685-24ba393d78a7] Running
	I1101 08:30:45.601410   10731 system_pods.go:89] "kube-scheduler-addons-491859" [4e097ceb-3a17-432d-8359-7ad7db3c99da] Running
	I1101 08:30:45.601418   10731 system_pods.go:89] "metrics-server-85b7d694d7-8j2pv" [8863ea1b-774d-469e-8487-d29ec16b131c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1101 08:30:45.601427   10731 system_pods.go:89] "nvidia-device-plugin-daemonset-hbv5p" [838833dc-5806-4421-822f-e50f71ba642b] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1101 08:30:45.601438   10731 system_pods.go:89] "registry-6b586f9694-nlmgw" [81e0129d-d199-423b-a493-623cb2695a4f] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1101 08:30:45.601455   10731 system_pods.go:89] "registry-creds-764b6fb674-rj5zk" [5f281acc-558f-462c-bf98-c52c7b8b34a1] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1101 08:30:45.601463   10731 system_pods.go:89] "registry-proxy-jncr6" [6640fd69-5d62-4d2e-acb5-66ff58f82684] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1101 08:30:45.601471   10731 system_pods.go:89] "snapshot-controller-7d9fbc56b8-7bhnn" [2243f94e-8cc4-4e41-9e6b-6e83768aa796] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1101 08:30:45.601483   10731 system_pods.go:89] "snapshot-controller-7d9fbc56b8-c9dzh" [502d593c-55d4-440e-b4f3-2a5f5c53bca7] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1101 08:30:45.601490   10731 system_pods.go:89] "storage-provisioner" [4b227df0-0df7-4c55-81bb-20a8928f38ea] Running
	I1101 08:30:45.601502   10731 system_pods.go:126] duration metric: took 1.08365464s to wait for k8s-apps to be running ...
	I1101 08:30:45.601516   10731 system_svc.go:44] waiting for kubelet service to be running ....
	I1101 08:30:45.601567   10731 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 08:30:45.619182   10731 system_svc.go:56] duration metric: took 17.656949ms WaitForService to wait for kubelet
	I1101 08:30:45.619219   10731 kubeadm.go:587] duration metric: took 42.227159063s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1101 08:30:45.619242   10731 node_conditions.go:102] verifying NodePressure condition ...
	I1101 08:30:45.622145   10731 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1101 08:30:45.622177   10731 node_conditions.go:123] node cpu capacity is 8
	I1101 08:30:45.622195   10731 node_conditions.go:105] duration metric: took 2.946946ms to run NodePressure ...
	I1101 08:30:45.622209   10731 start.go:242] waiting for startup goroutines ...
	I1101 08:30:45.701355   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:30:45.984897   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:30:46.085957   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:30:46.086015   10731 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:30:46.200525   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:30:46.487062   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:30:46.531249   10731 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:30:46.532021   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:30:46.700858   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:30:46.985526   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:30:47.033084   10731 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:30:47.033168   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:30:47.201822   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:30:47.484982   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:30:47.531250   10731 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:30:47.531712   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:30:47.701791   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:30:47.984654   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:30:48.030405   10731 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:30:48.030690   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:30:48.201216   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:30:48.484723   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:30:48.530668   10731 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:30:48.530960   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:30:48.701046   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:30:48.985379   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:30:49.031363   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:30:49.031510   10731 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:30:49.201495   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:30:49.484817   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:30:49.530909   10731 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:30:49.531747   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:30:49.700843   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:30:49.985574   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:30:50.030685   10731 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:30:50.032087   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:30:50.200993   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:30:50.484288   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:30:50.585490   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:30:50.585510   10731 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:30:50.701103   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:30:50.984591   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:30:51.030583   10731 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:30:51.030848   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:30:51.200782   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:30:51.484898   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:30:51.530906   10731 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:30:51.531544   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:30:51.701904   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:30:51.984598   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:30:52.030330   10731 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:30:52.030956   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:30:52.200859   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:30:52.487565   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:30:52.530634   10731 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:30:52.531795   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:30:52.702295   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:30:52.984680   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:30:53.030933   10731 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:30:53.031306   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:30:53.201746   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:30:53.484232   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:30:53.531112   10731 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:30:53.531487   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:30:53.701411   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:30:53.984694   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:30:54.030389   10731 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:30:54.030927   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:30:54.200550   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:30:54.484797   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:30:54.530498   10731 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:30:54.531066   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:30:54.701467   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:30:54.984652   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:30:55.030336   10731 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:30:55.030945   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:30:55.200786   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:30:55.485647   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:30:55.530498   10731 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:30:55.530996   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:30:55.703499   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:30:55.985040   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:30:56.031373   10731 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:30:56.031671   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:30:56.200587   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:30:56.484621   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:30:56.530467   10731 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:30:56.531015   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:30:56.700681   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:30:56.985471   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:30:57.030667   10731 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:30:57.031595   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:30:57.201591   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:30:57.484926   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:30:57.531537   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:30:57.531537   10731 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:30:57.701365   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:30:57.985014   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:30:58.030821   10731 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:30:58.031053   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:30:58.201010   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:30:58.649571   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:30:58.649614   10731 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:30:58.649643   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:30:58.701225   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:30:58.984571   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:30:59.030360   10731 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:30:59.030618   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:30:59.200652   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:30:59.485380   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:30:59.531430   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:30:59.531639   10731 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:30:59.701341   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:30:59.984955   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:31:00.030686   10731 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:31:00.085633   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:31:00.201480   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:31:00.421798   10731 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1101 08:31:00.485235   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:31:00.531215   10731 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:31:00.531305   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:31:00.701174   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:31:00.984684   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:31:01.031200   10731 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:31:01.031841   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1101 08:31:01.135289   10731 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 08:31:01.135328   10731 retry.go:31] will retry after 17.624454735s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 08:31:01.201386   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:31:01.484618   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:31:01.585394   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:31:01.585472   10731 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:31:01.701089   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:31:01.984558   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:31:02.030468   10731 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:31:02.030774   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:31:02.200774   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:31:02.484158   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:31:02.585371   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:31:02.585505   10731 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:31:02.701416   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:31:02.985014   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:31:03.031157   10731 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:31:03.031529   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:31:03.201061   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:31:03.486284   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:31:03.530807   10731 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:31:03.531555   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:31:03.701660   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:31:03.985125   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:31:04.030905   10731 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:31:04.031451   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:31:04.201050   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:31:04.486933   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:31:04.693911   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:31:04.694261   10731 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:31:04.863607   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:31:04.985566   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:31:05.030232   10731 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:31:05.031734   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:31:05.200393   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:31:05.488054   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:31:05.532770   10731 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:31:05.533594   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:31:05.700315   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:31:05.984749   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:31:06.030486   10731 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:31:06.030965   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:31:06.200653   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:31:06.485309   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:31:06.531155   10731 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:31:06.531435   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:31:06.701566   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:31:06.986309   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:31:07.087965   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:31:07.088108   10731 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:31:07.200623   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:31:07.485150   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:31:07.531705   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:31:07.531739   10731 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:31:07.700639   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:31:07.984849   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:31:08.030593   10731 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:31:08.031248   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:31:08.200555   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:31:08.486121   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:31:08.530921   10731 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:31:08.531459   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:31:08.701220   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:31:08.985054   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:31:09.030930   10731 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:31:09.031696   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:31:09.200419   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:31:09.484693   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:31:09.585149   10731 kapi.go:107] duration metric: took 1m4.556655418s to wait for kubernetes.io/minikube-addons=registry ...
	I1101 08:31:09.585348   10731 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:31:09.700900   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:31:09.987325   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:31:10.055482   10731 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:31:10.201183   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:31:10.485089   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:31:10.585964   10731 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:31:10.700899   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:31:10.985436   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:31:11.031477   10731 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:31:11.201310   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:31:11.484560   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:31:11.530474   10731 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:31:11.700579   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:31:11.985031   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:31:12.030798   10731 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:31:12.201147   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:31:12.484701   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:31:12.530973   10731 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:31:12.700765   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:31:12.985498   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:31:13.030262   10731 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:31:13.200936   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:31:13.484768   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:31:13.530311   10731 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:31:13.701165   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:31:13.984632   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:31:14.030196   10731 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:31:14.200684   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:31:14.485495   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:31:14.530750   10731 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:31:14.703955   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:31:14.984242   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:31:15.085196   10731 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:31:15.200914   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:31:15.484252   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:31:15.531066   10731 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:31:15.700906   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:31:15.984293   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:31:16.029990   10731 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:31:16.200480   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:31:16.485022   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:31:16.530583   10731 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:31:16.701357   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:31:16.988058   10731 kapi.go:107] duration metric: took 1m11.507059289s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1101 08:31:17.030885   10731 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:31:17.200572   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:31:17.530476   10731 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:31:17.700931   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:31:18.030926   10731 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:31:18.200532   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:31:18.533141   10731 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:31:18.704339   10731 kapi.go:107] duration metric: took 1m7.006683576s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1101 08:31:18.706036   10731 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-491859 cluster.
	I1101 08:31:18.707341   10731 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1101 08:31:18.708535   10731 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1101 08:31:18.760417   10731 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1101 08:31:19.032099   10731 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:31:19.531387   10731 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1101 08:31:19.575975   10731 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 08:31:19.576009   10731 retry.go:31] will retry after 21.105929344s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 08:31:20.030963   10731 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:31:20.530362   10731 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:31:21.030627   10731 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:31:21.530891   10731 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:31:22.031208   10731 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:31:22.530300   10731 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:31:23.030541   10731 kapi.go:107] duration metric: took 1m18.003419101s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1101 08:31:40.683989   10731 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	W1101 08:31:41.220141   10731 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	W1101 08:31:41.220264   10731 out.go:285] ! Enabling 'inspektor-gadget' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1101 08:31:41.222065   10731 out.go:179] * Enabled addons: registry-creds, nvidia-device-plugin, cloud-spanner, ingress-dns, storage-provisioner, storage-provisioner-rancher, metrics-server, yakd, default-storageclass, amd-gpu-device-plugin, volumesnapshots, registry, csi-hostpath-driver, gcp-auth, ingress
	I1101 08:31:41.223581   10731 addons.go:515] duration metric: took 1m37.831522321s for enable addons: enabled=[registry-creds nvidia-device-plugin cloud-spanner ingress-dns storage-provisioner storage-provisioner-rancher metrics-server yakd default-storageclass amd-gpu-device-plugin volumesnapshots registry csi-hostpath-driver gcp-auth ingress]
	I1101 08:31:41.223641   10731 start.go:247] waiting for cluster config update ...
	I1101 08:31:41.223670   10731 start.go:256] writing updated cluster config ...
	I1101 08:31:41.224041   10731 ssh_runner.go:195] Run: rm -f paused
	I1101 08:31:41.228161   10731 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1101 08:31:41.231692   10731 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-wp7lb" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 08:31:41.236100   10731 pod_ready.go:94] pod "coredns-66bc5c9577-wp7lb" is "Ready"
	I1101 08:31:41.236129   10731 pod_ready.go:86] duration metric: took 4.407953ms for pod "coredns-66bc5c9577-wp7lb" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 08:31:41.238151   10731 pod_ready.go:83] waiting for pod "etcd-addons-491859" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 08:31:41.242371   10731 pod_ready.go:94] pod "etcd-addons-491859" is "Ready"
	I1101 08:31:41.242395   10731 pod_ready.go:86] duration metric: took 4.222388ms for pod "etcd-addons-491859" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 08:31:41.244308   10731 pod_ready.go:83] waiting for pod "kube-apiserver-addons-491859" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 08:31:41.248202   10731 pod_ready.go:94] pod "kube-apiserver-addons-491859" is "Ready"
	I1101 08:31:41.248227   10731 pod_ready.go:86] duration metric: took 3.893348ms for pod "kube-apiserver-addons-491859" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 08:31:41.250307   10731 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-491859" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 08:31:41.632126   10731 pod_ready.go:94] pod "kube-controller-manager-addons-491859" is "Ready"
	I1101 08:31:41.632156   10731 pod_ready.go:86] duration metric: took 381.825433ms for pod "kube-controller-manager-addons-491859" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 08:31:41.832213   10731 pod_ready.go:83] waiting for pod "kube-proxy-h22tg" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 08:31:42.232241   10731 pod_ready.go:94] pod "kube-proxy-h22tg" is "Ready"
	I1101 08:31:42.232267   10731 pod_ready.go:86] duration metric: took 400.022337ms for pod "kube-proxy-h22tg" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 08:31:42.432216   10731 pod_ready.go:83] waiting for pod "kube-scheduler-addons-491859" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 08:31:42.831835   10731 pod_ready.go:94] pod "kube-scheduler-addons-491859" is "Ready"
	I1101 08:31:42.831877   10731 pod_ready.go:86] duration metric: took 399.624296ms for pod "kube-scheduler-addons-491859" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 08:31:42.831888   10731 pod_ready.go:40] duration metric: took 1.603688686s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1101 08:31:42.875375   10731 start.go:628] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1101 08:31:42.878437   10731 out.go:179] * Done! kubectl is now configured to use "addons-491859" cluster and "default" namespace by default
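The inspektor-gadget retries in the log above all fail for the same reason: kubectl's client-side validation rejects /etc/kubernetes/addons/ig-crd.yaml with "apiVersion not set, kind not set", and every document in an applied manifest must carry both top-level fields before kubectl will map it to a resource. A minimal sketch of a manifest that passes this check (the ConfigMap name and contents are illustrative and not taken from the real addon files):

	# Hypothetical manifest: apiVersion and kind are the two fields whose absence
	# produces the "error validating data" message seen in the log above.
	apiVersion: v1
	kind: ConfigMap
	metadata:
	  name: example-config        # illustrative name, not part of the addon
	  namespace: gadget
	data:
	  note: "kubectl apply validates apiVersion and kind on the client"

The --validate=false flag suggested in the error only skips this client-side check; a document without apiVersion and kind still cannot be applied, because those fields are what identify the object to kubectl and the API server, so the fix belongs in the generated ig-crd.yaml rather than in the apply command.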
	
	
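The gcp-auth messages earlier in the log point to the `gcp-auth-skip-secret` label as the opt-out for credential mounting. A minimal sketch of a pod carrying that label, assuming the conventional "true" value (only the key is named in the log; the pod name and image here are placeholders):

	apiVersion: v1
	kind: Pod
	metadata:
	  name: no-gcp-creds-example          # hypothetical pod name
	  labels:
	    gcp-auth-skip-secret: "true"      # key named in the gcp-auth note; value assumed
	spec:
	  containers:
	  - name: app
	    image: docker.io/kicbase/echo-server:1.0   # image reused from the CRI-O log below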
	==> CRI-O <==
	Nov 01 08:34:30 addons-491859 crio[775]: time="2025-11-01T08:34:30.051437739Z" level=info msg="Running pod sandbox: default/hello-world-app-5d498dc89-dz9mr/POD" id=24985099-0e34-4bb5-92cc-acb4ceb9d3c7 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 01 08:34:30 addons-491859 crio[775]: time="2025-11-01T08:34:30.051539639Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 08:34:30 addons-491859 crio[775]: time="2025-11-01T08:34:30.058074062Z" level=info msg="Got pod network &{Name:hello-world-app-5d498dc89-dz9mr Namespace:default ID:41c99b7b26e16aafda2cc4389a99d78bcdfc0a244dc552082a9b8eaad2f3f016 UID:8b832b7b-cb75-40f4-a38c-88250ab43306 NetNS:/var/run/netns/b0b81f5f-011d-436b-b9d4-1d00961579d6 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc000521258}] Aliases:map[]}"
	Nov 01 08:34:30 addons-491859 crio[775]: time="2025-11-01T08:34:30.058105736Z" level=info msg="Adding pod default_hello-world-app-5d498dc89-dz9mr to CNI network \"kindnet\" (type=ptp)"
	Nov 01 08:34:30 addons-491859 crio[775]: time="2025-11-01T08:34:30.068464385Z" level=info msg="Got pod network &{Name:hello-world-app-5d498dc89-dz9mr Namespace:default ID:41c99b7b26e16aafda2cc4389a99d78bcdfc0a244dc552082a9b8eaad2f3f016 UID:8b832b7b-cb75-40f4-a38c-88250ab43306 NetNS:/var/run/netns/b0b81f5f-011d-436b-b9d4-1d00961579d6 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc000521258}] Aliases:map[]}"
	Nov 01 08:34:30 addons-491859 crio[775]: time="2025-11-01T08:34:30.068598899Z" level=info msg="Checking pod default_hello-world-app-5d498dc89-dz9mr for CNI network kindnet (type=ptp)"
	Nov 01 08:34:30 addons-491859 crio[775]: time="2025-11-01T08:34:30.069513398Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Nov 01 08:34:30 addons-491859 crio[775]: time="2025-11-01T08:34:30.07036637Z" level=info msg="Ran pod sandbox 41c99b7b26e16aafda2cc4389a99d78bcdfc0a244dc552082a9b8eaad2f3f016 with infra container: default/hello-world-app-5d498dc89-dz9mr/POD" id=24985099-0e34-4bb5-92cc-acb4ceb9d3c7 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 01 08:34:30 addons-491859 crio[775]: time="2025-11-01T08:34:30.07164245Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=d3053ed2-b9d6-4b76-a4ef-0d436b020ed2 name=/runtime.v1.ImageService/ImageStatus
	Nov 01 08:34:30 addons-491859 crio[775]: time="2025-11-01T08:34:30.071803109Z" level=info msg="Image docker.io/kicbase/echo-server:1.0 not found" id=d3053ed2-b9d6-4b76-a4ef-0d436b020ed2 name=/runtime.v1.ImageService/ImageStatus
	Nov 01 08:34:30 addons-491859 crio[775]: time="2025-11-01T08:34:30.071838181Z" level=info msg="Neither image nor artfiact docker.io/kicbase/echo-server:1.0 found" id=d3053ed2-b9d6-4b76-a4ef-0d436b020ed2 name=/runtime.v1.ImageService/ImageStatus
	Nov 01 08:34:30 addons-491859 crio[775]: time="2025-11-01T08:34:30.072476578Z" level=info msg="Pulling image: docker.io/kicbase/echo-server:1.0" id=865b7354-9b0e-40a2-980f-7219f97882b8 name=/runtime.v1.ImageService/PullImage
	Nov 01 08:34:30 addons-491859 crio[775]: time="2025-11-01T08:34:30.089197144Z" level=info msg="Trying to access \"docker.io/kicbase/echo-server:1.0\""
	Nov 01 08:34:30 addons-491859 crio[775]: time="2025-11-01T08:34:30.85794709Z" level=info msg="Pulled image: docker.io/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86" id=865b7354-9b0e-40a2-980f-7219f97882b8 name=/runtime.v1.ImageService/PullImage
	Nov 01 08:34:30 addons-491859 crio[775]: time="2025-11-01T08:34:30.858562447Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=44e58cc9-44b6-477d-82f6-b3dfa1947978 name=/runtime.v1.ImageService/ImageStatus
	Nov 01 08:34:30 addons-491859 crio[775]: time="2025-11-01T08:34:30.860064752Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=74bdaffc-f11b-49d7-980d-c9733d999448 name=/runtime.v1.ImageService/ImageStatus
	Nov 01 08:34:30 addons-491859 crio[775]: time="2025-11-01T08:34:30.86400351Z" level=info msg="Creating container: default/hello-world-app-5d498dc89-dz9mr/hello-world-app" id=1ae7ce1f-6fdc-46b6-9bac-189cb92ebcb0 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 08:34:30 addons-491859 crio[775]: time="2025-11-01T08:34:30.864125102Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 08:34:30 addons-491859 crio[775]: time="2025-11-01T08:34:30.870173815Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 08:34:30 addons-491859 crio[775]: time="2025-11-01T08:34:30.870433517Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/3253fa74beb7a34c1d9ca1c2196fadb1f7f84dba720d4c9d395c9a7fa975f3f2/merged/etc/passwd: no such file or directory"
	Nov 01 08:34:30 addons-491859 crio[775]: time="2025-11-01T08:34:30.870475412Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/3253fa74beb7a34c1d9ca1c2196fadb1f7f84dba720d4c9d395c9a7fa975f3f2/merged/etc/group: no such file or directory"
	Nov 01 08:34:30 addons-491859 crio[775]: time="2025-11-01T08:34:30.870726049Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 08:34:30 addons-491859 crio[775]: time="2025-11-01T08:34:30.913457329Z" level=info msg="Created container 1d8cacdcc1fa8a9cf1a6ddbad13e0973439d238b0fcc71e6ee179403eed5b24a: default/hello-world-app-5d498dc89-dz9mr/hello-world-app" id=1ae7ce1f-6fdc-46b6-9bac-189cb92ebcb0 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 08:34:30 addons-491859 crio[775]: time="2025-11-01T08:34:30.914224604Z" level=info msg="Starting container: 1d8cacdcc1fa8a9cf1a6ddbad13e0973439d238b0fcc71e6ee179403eed5b24a" id=21de268d-622e-4608-8c49-96fe12769efb name=/runtime.v1.RuntimeService/StartContainer
	Nov 01 08:34:30 addons-491859 crio[775]: time="2025-11-01T08:34:30.916785278Z" level=info msg="Started container" PID=10068 containerID=1d8cacdcc1fa8a9cf1a6ddbad13e0973439d238b0fcc71e6ee179403eed5b24a description=default/hello-world-app-5d498dc89-dz9mr/hello-world-app id=21de268d-622e-4608-8c49-96fe12769efb name=/runtime.v1.RuntimeService/StartContainer sandboxID=41c99b7b26e16aafda2cc4389a99d78bcdfc0a244dc552082a9b8eaad2f3f016
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED                  STATE               NAME                                     ATTEMPT             POD ID              POD                                         NAMESPACE
	1d8cacdcc1fa8       docker.io/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86                                        Less than a second ago   Running             hello-world-app                          0                   41c99b7b26e16       hello-world-app-5d498dc89-dz9mr             default
	80a16cf786091       docker.io/upmcenterprises/registry-creds@sha256:93a633d4f2b76a1c66bf19c664dbddc56093a543de6d54320f19f585ccd7d605                             About a minute ago       Running             registry-creds                           0                   84f2238dbe5f1       registry-creds-764b6fb674-rj5zk             kube-system
	bf6c885fba97e       docker.io/library/nginx@sha256:667473807103639a0aca5b49534a216d2b64f0fb868aaa801f023da0cdd781c7                                              2 minutes ago            Running             nginx                                    0                   b83585ef3500d       nginx                                       default
	8ee7f2ba44bc3       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998                                          2 minutes ago            Running             busybox                                  0                   116db8fc3fe4f       busybox                                     default
	9ffcf3bba5109       registry.k8s.io/ingress-nginx/controller@sha256:7b4073fc95e078d863c0b0b08deb72e01d2cf629e2156822bcd394fc2bcd8e83                             3 minutes ago            Running             controller                               0                   5e177cfc53e3a       ingress-nginx-controller-675c5ddd98-6nth2   ingress-nginx
	53b633b13927f       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:441f351b4520c228d29ba8c02a438d9ba971dafbbba5c91eaf882b1528797fb8                                 3 minutes ago            Running             gcp-auth                                 0                   ea4e664f598d0       gcp-auth-78565c9fb4-z7tgf                   gcp-auth
	33e4e1fc1e330       registry.k8s.io/sig-storage/csi-snapshotter@sha256:d844cb1faeb4ecf44bae6aea370c9c6128a87e665e40370021427d79a8819ee5                          3 minutes ago            Running             csi-snapshotter                          0                   d3a4a44899566       csi-hostpathplugin-b7wqd                    kube-system
	f04af0fd3a62d       registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7                          3 minutes ago            Running             csi-provisioner                          0                   d3a4a44899566       csi-hostpathplugin-b7wqd                    kube-system
	a1f3a49b7f394       registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6                            3 minutes ago            Running             liveness-probe                           0                   d3a4a44899566       csi-hostpathplugin-b7wqd                    kube-system
	ff4bdf52bbb88       registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11                           3 minutes ago            Running             hostpath                                 0                   d3a4a44899566       csi-hostpathplugin-b7wqd                    kube-system
	a2c23a5170ee9       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:db9cb3dd78ffab71eb8746afcb57bd3859993cb150a76d8b7cebe79441c702cb                            3 minutes ago            Running             gadget                                   0                   1ec45885f2df1       gadget-ggvnk                                gadget
	9d1050c081be9       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc                3 minutes ago            Running             node-driver-registrar                    0                   d3a4a44899566       csi-hostpathplugin-b7wqd                    kube-system
	2c81dda5dfe97       gcr.io/k8s-minikube/kube-registry-proxy@sha256:8f72a79b63ca56074435e82b87fca2642a8117e60be313d3586dbe2bfff11cac                              3 minutes ago            Running             registry-proxy                           0                   4da6c27c4f365       registry-proxy-jncr6                        kube-system
	0f17b27c9fb94       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864   3 minutes ago            Running             csi-external-health-monitor-controller   0                   d3a4a44899566       csi-hostpathplugin-b7wqd                    kube-system
	3070142e88965       nvcr.io/nvidia/k8s-device-plugin@sha256:20db699f1480b6f37423cab909e9c6df5a4fdbd981b405e0d72f00a86fee5100                                     3 minutes ago            Running             nvidia-device-plugin-ctr                 0                   bfd96b8a614be       nvidia-device-plugin-daemonset-hbv5p        kube-system
	e0fe6aa919f9f       docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f                                     3 minutes ago            Running             amd-gpu-device-plugin                    0                   eac5d4e55668b       amd-gpu-device-plugin-6twrx                 kube-system
	dd32f839b496a       registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8                              3 minutes ago            Running             csi-resizer                              0                   fe18cc9a7812e       csi-hostpath-resizer-0                      kube-system
	97c62d07dbe74       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:603a4996fc2ece451c708708e2881a855991cda47ddca5a4458b69a04f48d7f2                   3 minutes ago            Exited              patch                                    0                   0f801687b0a22       ingress-nginx-admission-patch-lsz25         ingress-nginx
	36098b90e218e       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:603a4996fc2ece451c708708e2881a855991cda47ddca5a4458b69a04f48d7f2                   3 minutes ago            Exited              create                                   0                   bb97b090a41f6       ingress-nginx-admission-create-hh4rd        ingress-nginx
	b8dc66998b8c6       registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0                             3 minutes ago            Running             csi-attacher                             0                   500c0d0de486f       csi-hostpath-attacher-0                     kube-system
	c4c4e8392feed       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      3 minutes ago            Running             volume-snapshot-controller               0                   28ad2d10970c5       snapshot-controller-7d9fbc56b8-7bhnn        kube-system
	a4c41d6f050f2       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      3 minutes ago            Running             volume-snapshot-controller               0                   2f05ff87cd027       snapshot-controller-7d9fbc56b8-c9dzh        kube-system
	f5bdbd7214479       docker.io/marcnuri/yakd@sha256:8ebd1692ed5271719f13b728d9af7acb839aa04821e931c8993d908ad68b69fd                                              3 minutes ago            Running             yakd                                     0                   29a5ea326e6b9       yakd-dashboard-5ff678cb9-kmsmc              yakd-dashboard
	64b9c1289b678       docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef                             3 minutes ago            Running             local-path-provisioner                   0                   e62902b579140       local-path-provisioner-648f6765c9-5mm52     local-path-storage
	48f5680426820       gcr.io/cloud-spanner-emulator/emulator@sha256:66030f526b1bc41f0d2027b496fd8fa53f620bf9d5a18baa07990e67f1a20237                               3 minutes ago            Running             cloud-spanner-emulator                   0                   a077f7d3f0ee4       cloud-spanner-emulator-86bd5cbb97-d2cmm     default
	2b4413f8423a3       docker.io/library/registry@sha256:f57ffd2bb01704b6082396158e77ca6d1112bc6fe32315c322864de804750d8a                                           3 minutes ago            Running             registry                                 0                   fb949762c0cdd       registry-6b586f9694-nlmgw                   kube-system
	73d495a359ef0       docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7                               3 minutes ago            Running             minikube-ingress-dns                     0                   8b8a00c31af6f       kube-ingress-dns-minikube                   kube-system
	18fc9837ab4ea       registry.k8s.io/metrics-server/metrics-server@sha256:5dd31abb8093690d9624a53277a00d2257e7e57e6766be3f9f54cf9f54cddbc1                        3 minutes ago            Running             metrics-server                           0                   2726e88d1b32a       metrics-server-85b7d694d7-8j2pv             kube-system
	87757a0f68b4c       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                                             3 minutes ago            Running             storage-provisioner                      0                   83cc6f84d17be       storage-provisioner                         kube-system
	f17c2b6b25fbc       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                                                             3 minutes ago            Running             coredns                                  0                   d8dfe32ac11ec       coredns-66bc5c9577-wp7lb                    kube-system
	c60507f296e95       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                                                             4 minutes ago            Running             kindnet-cni                              0                   482fcc52b0cd5       kindnet-7cj4p                               kube-system
	4c1ad1a76dfd8       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                                                             4 minutes ago            Running             kube-proxy                               0                   1caf02966b145       kube-proxy-h22tg                            kube-system
	808e84f4795d8       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                                                             4 minutes ago            Running             kube-controller-manager                  0                   c3ed8efa0da25       kube-controller-manager-addons-491859       kube-system
	d4c72eaef4436       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                                                             4 minutes ago            Running             kube-apiserver                           0                   cc3435e17324d       kube-apiserver-addons-491859                kube-system
	cdda903ada754       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                                                             4 minutes ago            Running             kube-scheduler                           0                   44ebfd68de35e       kube-scheduler-addons-491859                kube-system
	b29235edc5383       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                                                             4 minutes ago            Running             etcd                                     0                   1117cb301cd25       etcd-addons-491859                          kube-system
	
	
	==> coredns [f17c2b6b25fbc93674f08548b5309b9715e2cb6d7600ac5d6557505072b3fb3f] <==
	[INFO] 10.244.0.22:33116 - 21970 "A IN storage.googleapis.com.us-central1-a.c.k8s-minikube.internal. udp 89 false 1232" NXDOMAIN qr,rd,ra 188 0.004331009s
	[INFO] 10.244.0.22:51353 - 1685 "AAAA IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 169 0.00524232s
	[INFO] 10.244.0.22:58439 - 28947 "A IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 169 0.005423286s
	[INFO] 10.244.0.22:38115 - 17408 "AAAA IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.004730004s
	[INFO] 10.244.0.22:37500 - 51497 "A IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.004843339s
	[INFO] 10.244.0.22:45730 - 47396 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 648 0.001424946s
	[INFO] 10.244.0.22:53694 - 19739 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.002149264s
	[INFO] 10.244.0.27:49586 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000244414s
	[INFO] 10.244.0.27:45977 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000147382s
	[INFO] 10.244.0.31:53987 - 46336 "A IN accounts.google.com.kube-system.svc.cluster.local. udp 67 false 512" NXDOMAIN qr,aa,rd 160 0.000187175s
	[INFO] 10.244.0.31:39786 - 22081 "AAAA IN accounts.google.com.kube-system.svc.cluster.local. udp 67 false 512" NXDOMAIN qr,aa,rd 160 0.000295061s
	[INFO] 10.244.0.31:36012 - 54452 "A IN accounts.google.com.svc.cluster.local. udp 55 false 512" NXDOMAIN qr,aa,rd 148 0.000108013s
	[INFO] 10.244.0.31:49530 - 24240 "AAAA IN accounts.google.com.svc.cluster.local. udp 55 false 512" NXDOMAIN qr,aa,rd 148 0.00015105s
	[INFO] 10.244.0.31:37388 - 40686 "A IN accounts.google.com.cluster.local. udp 51 false 512" NXDOMAIN qr,aa,rd 144 0.000087272s
	[INFO] 10.244.0.31:38784 - 7838 "AAAA IN accounts.google.com.cluster.local. udp 51 false 512" NXDOMAIN qr,aa,rd 144 0.000141132s
	[INFO] 10.244.0.31:33469 - 48175 "AAAA IN accounts.google.com.local. udp 43 false 512" NXDOMAIN qr,rd,ra 43 0.003273631s
	[INFO] 10.244.0.31:50705 - 31932 "A IN accounts.google.com.local. udp 43 false 512" NXDOMAIN qr,rd,ra 43 0.00327101s
	[INFO] 10.244.0.31:53341 - 1894 "AAAA IN accounts.google.com.us-central1-a.c.k8s-minikube.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 185 0.004506232s
	[INFO] 10.244.0.31:45984 - 12611 "A IN accounts.google.com.us-central1-a.c.k8s-minikube.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 185 0.004606972s
	[INFO] 10.244.0.31:35423 - 32971 "A IN accounts.google.com.c.k8s-minikube.internal. udp 61 false 512" NXDOMAIN qr,rd,ra 166 0.004650366s
	[INFO] 10.244.0.31:38515 - 39336 "AAAA IN accounts.google.com.c.k8s-minikube.internal. udp 61 false 512" NXDOMAIN qr,rd,ra 166 0.005073532s
	[INFO] 10.244.0.31:58683 - 56577 "A IN accounts.google.com.google.internal. udp 53 false 512" NXDOMAIN qr,rd,ra 158 0.004242453s
	[INFO] 10.244.0.31:32846 - 29884 "AAAA IN accounts.google.com.google.internal. udp 53 false 512" NXDOMAIN qr,rd,ra 158 0.004391194s
	[INFO] 10.244.0.31:34105 - 26756 "A IN accounts.google.com. udp 37 false 512" NOERROR qr,rd,ra 72 0.00193602s
	[INFO] 10.244.0.31:59527 - 26992 "AAAA IN accounts.google.com. udp 37 false 512" NOERROR qr,rd,ra 84 0.002307163s
	
	
	==> describe nodes <==
	Name:               addons-491859
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-491859
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=21e20c7776311c6e29254646bf2620ea610dd192
	                    minikube.k8s.io/name=addons-491859
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_01T08_29_58_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-491859
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-491859"}
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 01 Nov 2025 08:29:54 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-491859
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 01 Nov 2025 08:34:22 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 01 Nov 2025 08:33:53 +0000   Sat, 01 Nov 2025 08:29:53 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 01 Nov 2025 08:33:53 +0000   Sat, 01 Nov 2025 08:29:53 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 01 Nov 2025 08:33:53 +0000   Sat, 01 Nov 2025 08:29:53 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 01 Nov 2025 08:33:53 +0000   Sat, 01 Nov 2025 08:30:44 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-491859
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863348Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863348Ki
	  pods:               110
	System Info:
	  Machine ID:                 98aac72b9abe9f06f1b9b38568f5cc96
	  System UUID:                5c1e6350-6319-4483-8aa0-6397d62a761e
	  Boot ID:                    87f40279-f2b1-40f3-beea-00aea5942dd1
	  Kernel Version:             6.8.0-1043-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (29 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m48s
	  default                     cloud-spanner-emulator-86bd5cbb97-d2cmm      0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m27s
	  default                     hello-world-app-5d498dc89-dz9mr              0 (0%)        0 (0%)      0 (0%)           0 (0%)         2s
	  default                     nginx                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m27s
	  gadget                      gadget-ggvnk                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m27s
	  gcp-auth                    gcp-auth-78565c9fb4-z7tgf                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m20s
	  ingress-nginx               ingress-nginx-controller-675c5ddd98-6nth2    100m (1%)     0 (0%)      90Mi (0%)        0 (0%)         4m27s
	  kube-system                 amd-gpu-device-plugin-6twrx                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m47s
	  kube-system                 coredns-66bc5c9577-wp7lb                     100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     4m28s
	  kube-system                 csi-hostpath-attacher-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m26s
	  kube-system                 csi-hostpath-resizer-0                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m26s
	  kube-system                 csi-hostpathplugin-b7wqd                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m47s
	  kube-system                 etcd-addons-491859                           100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         4m34s
	  kube-system                 kindnet-7cj4p                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      4m29s
	  kube-system                 kube-apiserver-addons-491859                 250m (3%)     0 (0%)      0 (0%)           0 (0%)         4m35s
	  kube-system                 kube-controller-manager-addons-491859        200m (2%)     0 (0%)      0 (0%)           0 (0%)         4m34s
	  kube-system                 kube-ingress-dns-minikube                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m27s
	  kube-system                 kube-proxy-h22tg                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m29s
	  kube-system                 kube-scheduler-addons-491859                 100m (1%)     0 (0%)      0 (0%)           0 (0%)         4m34s
	  kube-system                 metrics-server-85b7d694d7-8j2pv              100m (1%)     0 (0%)      200Mi (0%)       0 (0%)         4m27s
	  kube-system                 nvidia-device-plugin-daemonset-hbv5p         0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m47s
	  kube-system                 registry-6b586f9694-nlmgw                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m27s
	  kube-system                 registry-creds-764b6fb674-rj5zk              0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m27s
	  kube-system                 registry-proxy-jncr6                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m47s
	  kube-system                 snapshot-controller-7d9fbc56b8-7bhnn         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m26s
	  kube-system                 snapshot-controller-7d9fbc56b8-c9dzh         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m26s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m27s
	  local-path-storage          local-path-provisioner-648f6765c9-5mm52      0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m27s
	  yakd-dashboard              yakd-dashboard-5ff678cb9-kmsmc               0 (0%)        0 (0%)      128Mi (0%)       256Mi (0%)     4m27s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1050m (13%)  100m (1%)
	  memory             638Mi (1%)   476Mi (1%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m27s                  kube-proxy       
	  Normal  Starting                 4m39s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m39s (x8 over 4m39s)  kubelet          Node addons-491859 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m39s (x8 over 4m39s)  kubelet          Node addons-491859 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m39s (x8 over 4m39s)  kubelet          Node addons-491859 status is now: NodeHasSufficientPID
	  Normal  Starting                 4m34s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m34s                  kubelet          Node addons-491859 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m34s                  kubelet          Node addons-491859 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m34s                  kubelet          Node addons-491859 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           4m29s                  node-controller  Node addons-491859 event: Registered Node addons-491859 in Controller
	  Normal  NodeReady                3m47s                  kubelet          Node addons-491859 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.095214] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.027343] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +7.469466] kauditd_printk_skb: 47 callbacks suppressed
	[Nov 1 08:32] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: ee bc 84 54 94 a6 36 9d 7b 9d 95 bf 08 00
	[  +1.054651] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: ee bc 84 54 94 a6 36 9d 7b 9d 95 bf 08 00
	[  +1.023935] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000028] ll header: 00000000: ee bc 84 54 94 a6 36 9d 7b 9d 95 bf 08 00
	[  +1.024867] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: ee bc 84 54 94 a6 36 9d 7b 9d 95 bf 08 00
	[  +1.022853] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: ee bc 84 54 94 a6 36 9d 7b 9d 95 bf 08 00
	[  +1.023916] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000026] ll header: 00000000: ee bc 84 54 94 a6 36 9d 7b 9d 95 bf 08 00
	[  +2.047754] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: ee bc 84 54 94 a6 36 9d 7b 9d 95 bf 08 00
	[  +4.032545] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: ee bc 84 54 94 a6 36 9d 7b 9d 95 bf 08 00
	[  +8.190142] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000026] ll header: 00000000: ee bc 84 54 94 a6 36 9d 7b 9d 95 bf 08 00
	[ +16.382282] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: ee bc 84 54 94 a6 36 9d 7b 9d 95 bf 08 00
	[Nov 1 08:33] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: ee bc 84 54 94 a6 36 9d 7b 9d 95 bf 08 00
	
	
	==> etcd [b29235edc538375f7d6a03229d64488db46c49997437218731a8cd64edc28444] <==
	{"level":"warn","ts":"2025-11-01T08:30:32.137630Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47102","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T08:30:58.641093Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"184.235874ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/replicasets\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-11-01T08:30:58.641276Z","caller":"traceutil/trace.go:172","msg":"trace[1071687753] range","detail":"{range_begin:/registry/replicasets; range_end:; response_count:0; response_revision:1050; }","duration":"184.463028ms","start":"2025-11-01T08:30:58.456793Z","end":"2025-11-01T08:30:58.641256Z","steps":["trace[1071687753] 'agreement among raft nodes before linearized reading'  (duration: 73.383718ms)","trace[1071687753] 'range keys from in-memory index tree'  (duration: 110.825372ms)"],"step_count":2}
	{"level":"warn","ts":"2025-11-01T08:30:58.644032Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"110.993001ms","expected-duration":"100ms","prefix":"","request":"header:<ID:8128041020883280613 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/pods/gcp-auth/gcp-auth-certs-patch-2265b\" mod_revision:1048 > success:<request_put:<key:\"/registry/pods/gcp-auth/gcp-auth-certs-patch-2265b\" value_size:4415 >> failure:<request_range:<key:\"/registry/pods/gcp-auth/gcp-auth-certs-patch-2265b\" > >>","response":"size:16"}
	{"level":"info","ts":"2025-11-01T08:30:58.644152Z","caller":"traceutil/trace.go:172","msg":"trace[237136442] linearizableReadLoop","detail":"{readStateIndex:1078; appliedIndex:1077; }","duration":"113.989318ms","start":"2025-11-01T08:30:58.530149Z","end":"2025-11-01T08:30:58.644138Z","steps":["trace[237136442] 'read index received'  (duration: 40.599µs)","trace[237136442] 'applied index is now lower than readState.Index'  (duration: 113.947442ms)"],"step_count":2}
	{"level":"warn","ts":"2025-11-01T08:30:58.644283Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"161.423891ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-11-01T08:30:58.644324Z","caller":"traceutil/trace.go:172","msg":"trace[540881731] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1051; }","duration":"161.452999ms","start":"2025-11-01T08:30:58.482848Z","end":"2025-11-01T08:30:58.644301Z","steps":["trace[540881731] 'agreement among raft nodes before linearized reading'  (duration: 161.345896ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-01T08:30:58.644546Z","caller":"traceutil/trace.go:172","msg":"trace[1116447530] transaction","detail":"{read_only:false; response_revision:1051; number_of_response:1; }","duration":"206.893581ms","start":"2025-11-01T08:30:58.437629Z","end":"2025-11-01T08:30:58.644522Z","steps":["trace[1116447530] 'process raft request'  (duration: 92.553284ms)","trace[1116447530] 'compare'  (duration: 110.856274ms)"],"step_count":2}
	{"level":"warn","ts":"2025-11-01T08:30:58.644735Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"114.627641ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-11-01T08:30:58.644772Z","caller":"traceutil/trace.go:172","msg":"trace[298041514] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1051; }","duration":"114.66872ms","start":"2025-11-01T08:30:58.530095Z","end":"2025-11-01T08:30:58.644764Z","steps":["trace[298041514] 'agreement among raft nodes before linearized reading'  (duration: 114.597534ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-01T08:30:58.644913Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"115.365904ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-11-01T08:30:58.644940Z","caller":"traceutil/trace.go:172","msg":"trace[840698398] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1051; }","duration":"115.392476ms","start":"2025-11-01T08:30:58.529540Z","end":"2025-11-01T08:30:58.644932Z","steps":["trace[840698398] 'agreement among raft nodes before linearized reading'  (duration: 115.344117ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-01T08:31:04.691597Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"162.39284ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-11-01T08:31:04.691652Z","caller":"traceutil/trace.go:172","msg":"trace[884222080] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1115; }","duration":"162.456817ms","start":"2025-11-01T08:31:04.529181Z","end":"2025-11-01T08:31:04.691637Z","steps":["trace[884222080] 'agreement among raft nodes before linearized reading'  (duration: 37.985334ms)","trace[884222080] 'range keys from in-memory index tree'  (duration: 124.379268ms)"],"step_count":2}
	{"level":"warn","ts":"2025-11-01T08:31:04.691660Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"124.403486ms","expected-duration":"100ms","prefix":"","request":"header:<ID:8128041020883280793 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/pods/kube-system/amd-gpu-device-plugin-6twrx\" mod_revision:926 > success:<request_put:<key:\"/registry/pods/kube-system/amd-gpu-device-plugin-6twrx\" value_size:4565 >> failure:<request_range:<key:\"/registry/pods/kube-system/amd-gpu-device-plugin-6twrx\" > >>","response":"size:16"}
	{"level":"info","ts":"2025-11-01T08:31:04.691829Z","caller":"traceutil/trace.go:172","msg":"trace[994324902] transaction","detail":"{read_only:false; response_revision:1116; number_of_response:1; }","duration":"214.346723ms","start":"2025-11-01T08:31:04.477463Z","end":"2025-11-01T08:31:04.691810Z","steps":["trace[994324902] 'process raft request'  (duration: 89.731375ms)","trace[994324902] 'compare'  (duration: 124.328345ms)"],"step_count":2}
	{"level":"info","ts":"2025-11-01T08:31:04.691833Z","caller":"traceutil/trace.go:172","msg":"trace[873270259] linearizableReadLoop","detail":"{readStateIndex:1144; appliedIndex:1143; }","duration":"124.674198ms","start":"2025-11-01T08:31:04.567145Z","end":"2025-11-01T08:31:04.691819Z","steps":["trace[873270259] 'read index received'  (duration: 123.621805ms)","trace[873270259] 'applied index is now lower than readState.Index'  (duration: 1.050971ms)"],"step_count":2}
	{"level":"info","ts":"2025-11-01T08:31:04.691858Z","caller":"traceutil/trace.go:172","msg":"trace[1199295295] transaction","detail":"{read_only:false; response_revision:1117; number_of_response:1; }","duration":"159.964426ms","start":"2025-11-01T08:31:04.531884Z","end":"2025-11-01T08:31:04.691849Z","steps":["trace[1199295295] 'process raft request'  (duration: 159.873308ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-01T08:31:04.692004Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"161.923497ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-11-01T08:31:04.692031Z","caller":"traceutil/trace.go:172","msg":"trace[1161329594] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1118; }","duration":"161.952504ms","start":"2025-11-01T08:31:04.530070Z","end":"2025-11-01T08:31:04.692023Z","steps":["trace[1161329594] 'agreement among raft nodes before linearized reading'  (duration: 161.844345ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-01T08:31:04.692041Z","caller":"traceutil/trace.go:172","msg":"trace[2135679475] transaction","detail":"{read_only:false; response_revision:1118; number_of_response:1; }","duration":"150.532136ms","start":"2025-11-01T08:31:04.541501Z","end":"2025-11-01T08:31:04.692033Z","steps":["trace[2135679475] 'process raft request'  (duration: 150.316182ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-01T08:31:04.861925Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"163.17141ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-11-01T08:31:04.862002Z","caller":"traceutil/trace.go:172","msg":"trace[974332960] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1119; }","duration":"163.255158ms","start":"2025-11-01T08:31:04.698729Z","end":"2025-11-01T08:31:04.861984Z","steps":["trace[974332960] 'agreement among raft nodes before linearized reading'  (duration: 136.517634ms)","trace[974332960] 'range keys from in-memory index tree'  (duration: 26.627503ms)"],"step_count":2}
	{"level":"info","ts":"2025-11-01T08:31:04.862107Z","caller":"traceutil/trace.go:172","msg":"trace[1181685282] transaction","detail":"{read_only:false; response_revision:1121; number_of_response:1; }","duration":"164.056879ms","start":"2025-11-01T08:31:04.698042Z","end":"2025-11-01T08:31:04.862099Z","steps":["trace[1181685282] 'process raft request'  (duration: 163.942969ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-01T08:31:04.862126Z","caller":"traceutil/trace.go:172","msg":"trace[1626285233] transaction","detail":"{read_only:false; response_revision:1120; number_of_response:1; }","duration":"164.617421ms","start":"2025-11-01T08:31:04.697489Z","end":"2025-11-01T08:31:04.862106Z","steps":["trace[1626285233] 'process raft request'  (duration: 137.759732ms)","trace[1626285233] 'compare'  (duration: 26.590567ms)"],"step_count":2}
	
	
	==> gcp-auth [53b633b13927f21d413e05b002d8fbcad4f3906c181691eb3e2d4a91d26ff070] <==
	2025/11/01 08:31:18 GCP Auth Webhook started!
	2025/11/01 08:31:43 Ready to marshal response ...
	2025/11/01 08:31:43 Ready to write response ...
	2025/11/01 08:31:43 Ready to marshal response ...
	2025/11/01 08:31:43 Ready to write response ...
	2025/11/01 08:31:43 Ready to marshal response ...
	2025/11/01 08:31:43 Ready to write response ...
	2025/11/01 08:31:51 Ready to marshal response ...
	2025/11/01 08:31:51 Ready to write response ...
	2025/11/01 08:31:51 Ready to marshal response ...
	2025/11/01 08:31:51 Ready to write response ...
	2025/11/01 08:31:58 Ready to marshal response ...
	2025/11/01 08:31:58 Ready to write response ...
	2025/11/01 08:32:01 Ready to marshal response ...
	2025/11/01 08:32:01 Ready to write response ...
	2025/11/01 08:32:04 Ready to marshal response ...
	2025/11/01 08:32:04 Ready to write response ...
	2025/11/01 08:32:15 Ready to marshal response ...
	2025/11/01 08:32:15 Ready to write response ...
	2025/11/01 08:32:40 Ready to marshal response ...
	2025/11/01 08:32:40 Ready to write response ...
	2025/11/01 08:34:29 Ready to marshal response ...
	2025/11/01 08:34:29 Ready to write response ...
	
	
	==> kernel <==
	 08:34:31 up 16 min,  0 user,  load average: 0.38, 0.54, 0.27
	Linux addons-491859 6.8.0-1043-gcp #46~22.04.1-Ubuntu SMP Wed Oct 22 19:00:03 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [c60507f296e9571f568638d8fac5f87c704242925cf7b4aa378db809c3e3176b] <==
	I1101 08:32:23.947501       1 main.go:301] handling current node
	I1101 08:32:33.948104       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1101 08:32:33.948138       1 main.go:301] handling current node
	I1101 08:32:43.947734       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1101 08:32:43.947774       1 main.go:301] handling current node
	I1101 08:32:53.947177       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1101 08:32:53.947220       1 main.go:301] handling current node
	I1101 08:33:03.947424       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1101 08:33:03.947461       1 main.go:301] handling current node
	I1101 08:33:13.947531       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1101 08:33:13.947570       1 main.go:301] handling current node
	I1101 08:33:23.947444       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1101 08:33:23.947504       1 main.go:301] handling current node
	I1101 08:33:33.947831       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1101 08:33:33.947877       1 main.go:301] handling current node
	I1101 08:33:43.947536       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1101 08:33:43.947569       1 main.go:301] handling current node
	I1101 08:33:53.947924       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1101 08:33:53.947955       1 main.go:301] handling current node
	I1101 08:34:03.947195       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1101 08:34:03.947230       1 main.go:301] handling current node
	I1101 08:34:13.947508       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1101 08:34:13.947549       1 main.go:301] handling current node
	I1101 08:34:23.947803       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1101 08:34:23.947838       1 main.go:301] handling current node
	
	
	==> kube-apiserver [d4c72eaef44361ee96a4bd979c97bf60778beaa33b136b1407ada4c192a11240] <==
	 > logger="UnhandledError"
	E1101 08:30:47.748938       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.105.158.97:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.105.158.97:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.105.158.97:443: connect: connection refused" logger="UnhandledError"
	E1101 08:30:47.750580       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.105.158.97:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.105.158.97:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.105.158.97:443: connect: connection refused" logger="UnhandledError"
	W1101 08:30:48.749069       1 handler_proxy.go:99] no RequestInfo found in the context
	W1101 08:30:48.749105       1 handler_proxy.go:99] no RequestInfo found in the context
	E1101 08:30:48.749138       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I1101 08:30:48.749156       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	E1101 08:30:48.749167       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1101 08:30:48.750333       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	E1101 08:30:52.760503       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.105.158.97:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.105.158.97:443/apis/metrics.k8s.io/v1beta1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" logger="UnhandledError"
	W1101 08:30:52.760559       1 handler_proxy.go:99] no RequestInfo found in the context
	E1101 08:30:52.760603       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1101 08:30:52.770851       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1101 08:31:50.557803       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:60866: use of closed network connection
	E1101 08:31:50.708797       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:60912: use of closed network connection
	I1101 08:32:04.471470       1 controller.go:667] quota admission added evaluator for: ingresses.networking.k8s.io
	I1101 08:32:04.668082       1 alloc.go:328] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.108.126.137"}
	I1101 08:32:24.528993       1 controller.go:667] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I1101 08:34:29.826978       1 alloc.go:328] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.100.249.249"}
	
	
	==> kube-controller-manager [808e84f4795d8f4c21c0316fc7e7437f547a0d653508d0419288257452ccf97b] <==
	I1101 08:30:02.088639       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1101 08:30:02.088652       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1101 08:30:02.088660       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1101 08:30:02.088804       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1101 08:30:02.088844       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1101 08:30:02.088924       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1101 08:30:02.088939       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1101 08:30:02.089175       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1101 08:30:02.089545       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1101 08:30:02.089585       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1101 08:30:02.089603       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1101 08:30:02.090742       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1101 08:30:02.091903       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1101 08:30:02.094199       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1101 08:30:02.094240       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1101 08:30:02.104432       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1101 08:30:02.112098       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	E1101 08:30:32.098335       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1101 08:30:32.098472       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="volumesnapshots.snapshot.storage.k8s.io"
	I1101 08:30:32.098504       1 shared_informer.go:349] "Waiting for caches to sync" controller="resource quota"
	I1101 08:30:32.121699       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I1101 08:30:32.125108       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I1101 08:30:32.199240       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1101 08:30:32.225463       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1101 08:30:47.040659       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [4c1ad1a76dfd8c2d5f52abbb115ac02bc7871e82ffe8eed8b2bce0574ab65ce3] <==
	I1101 08:30:03.433198       1 server_linux.go:53] "Using iptables proxy"
	I1101 08:30:03.609646       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1101 08:30:03.711817       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1101 08:30:03.711877       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1101 08:30:03.711960       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1101 08:30:03.741388       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1101 08:30:03.741516       1 server_linux.go:132] "Using iptables Proxier"
	I1101 08:30:03.748942       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1101 08:30:03.756071       1 server.go:527] "Version info" version="v1.34.1"
	I1101 08:30:03.756107       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1101 08:30:03.759660       1 config.go:200] "Starting service config controller"
	I1101 08:30:03.759682       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1101 08:30:03.759714       1 config.go:106] "Starting endpoint slice config controller"
	I1101 08:30:03.759721       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1101 08:30:03.759737       1 config.go:403] "Starting serviceCIDR config controller"
	I1101 08:30:03.759742       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1101 08:30:03.760675       1 config.go:309] "Starting node config controller"
	I1101 08:30:03.760685       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1101 08:30:03.760693       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1101 08:30:03.859942       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1101 08:30:03.860779       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1101 08:30:03.864557       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [cdda903ada7547082c69c9584210b581ad9bfe62602052de013ddc3c59d043bc] <==
	E1101 08:29:54.659730       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1101 08:29:54.659837       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1101 08:29:54.660024       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1101 08:29:54.660044       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1101 08:29:54.660092       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1101 08:29:54.660090       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1101 08:29:54.660134       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1101 08:29:54.660185       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1101 08:29:55.508830       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1101 08:29:55.537173       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1101 08:29:55.558893       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1101 08:29:55.564922       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1101 08:29:55.581322       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1101 08:29:55.603924       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1101 08:29:55.603943       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1101 08:29:55.608299       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1101 08:29:55.645608       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1101 08:29:55.678147       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1101 08:29:55.735284       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1101 08:29:55.751361       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1101 08:29:55.831703       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1101 08:29:55.836835       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1101 08:29:55.851064       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1101 08:29:55.877552       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	I1101 08:29:57.851557       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 01 08:32:47 addons-491859 kubelet[1302]: I1101 08:32:47.917402    1302 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/0f7fb92d-6302-49e0-8f4b-8df78487ef42-gcp-creds\") pod \"0f7fb92d-6302-49e0-8f4b-8df78487ef42\" (UID: \"0f7fb92d-6302-49e0-8f4b-8df78487ef42\") "
	Nov 01 08:32:47 addons-491859 kubelet[1302]: I1101 08:32:47.917532    1302 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"task-pv-storage\" (UniqueName: \"kubernetes.io/csi/hostpath.csi.k8s.io^53cc22c2-b6fd-11f0-ac28-12912ffa2335\") pod \"0f7fb92d-6302-49e0-8f4b-8df78487ef42\" (UID: \"0f7fb92d-6302-49e0-8f4b-8df78487ef42\") "
	Nov 01 08:32:47 addons-491859 kubelet[1302]: I1101 08:32:47.917528    1302 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0f7fb92d-6302-49e0-8f4b-8df78487ef42-gcp-creds" (OuterVolumeSpecName: "gcp-creds") pod "0f7fb92d-6302-49e0-8f4b-8df78487ef42" (UID: "0f7fb92d-6302-49e0-8f4b-8df78487ef42"). InnerVolumeSpecName "gcp-creds". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
	Nov 01 08:32:47 addons-491859 kubelet[1302]: I1101 08:32:47.917672    1302 reconciler_common.go:299] "Volume detached for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/0f7fb92d-6302-49e0-8f4b-8df78487ef42-gcp-creds\") on node \"addons-491859\" DevicePath \"\""
	Nov 01 08:32:47 addons-491859 kubelet[1302]: I1101 08:32:47.919764    1302 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0f7fb92d-6302-49e0-8f4b-8df78487ef42-kube-api-access-zsdvj" (OuterVolumeSpecName: "kube-api-access-zsdvj") pod "0f7fb92d-6302-49e0-8f4b-8df78487ef42" (UID: "0f7fb92d-6302-49e0-8f4b-8df78487ef42"). InnerVolumeSpecName "kube-api-access-zsdvj". PluginName "kubernetes.io/projected", VolumeGIDValue ""
	Nov 01 08:32:47 addons-491859 kubelet[1302]: I1101 08:32:47.920593    1302 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/hostpath.csi.k8s.io^53cc22c2-b6fd-11f0-ac28-12912ffa2335" (OuterVolumeSpecName: "task-pv-storage") pod "0f7fb92d-6302-49e0-8f4b-8df78487ef42" (UID: "0f7fb92d-6302-49e0-8f4b-8df78487ef42"). InnerVolumeSpecName "pvc-65dbf71b-4a04-4d17-b569-5db66ba82c58". PluginName "kubernetes.io/csi", VolumeGIDValue ""
	Nov 01 08:32:47 addons-491859 kubelet[1302]: I1101 08:32:47.938125    1302 scope.go:117] "RemoveContainer" containerID="12d4071fb297814cbb2eb280d8fb1fcb30ebd37cdfe51e5a9469facd1b697ebe"
	Nov 01 08:32:47 addons-491859 kubelet[1302]: I1101 08:32:47.949148    1302 scope.go:117] "RemoveContainer" containerID="12d4071fb297814cbb2eb280d8fb1fcb30ebd37cdfe51e5a9469facd1b697ebe"
	Nov 01 08:32:47 addons-491859 kubelet[1302]: E1101 08:32:47.949660    1302 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"12d4071fb297814cbb2eb280d8fb1fcb30ebd37cdfe51e5a9469facd1b697ebe\": container with ID starting with 12d4071fb297814cbb2eb280d8fb1fcb30ebd37cdfe51e5a9469facd1b697ebe not found: ID does not exist" containerID="12d4071fb297814cbb2eb280d8fb1fcb30ebd37cdfe51e5a9469facd1b697ebe"
	Nov 01 08:32:47 addons-491859 kubelet[1302]: I1101 08:32:47.949708    1302 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"12d4071fb297814cbb2eb280d8fb1fcb30ebd37cdfe51e5a9469facd1b697ebe"} err="failed to get container status \"12d4071fb297814cbb2eb280d8fb1fcb30ebd37cdfe51e5a9469facd1b697ebe\": rpc error: code = NotFound desc = could not find container \"12d4071fb297814cbb2eb280d8fb1fcb30ebd37cdfe51e5a9469facd1b697ebe\": container with ID starting with 12d4071fb297814cbb2eb280d8fb1fcb30ebd37cdfe51e5a9469facd1b697ebe not found: ID does not exist"
	Nov 01 08:32:48 addons-491859 kubelet[1302]: I1101 08:32:48.018411    1302 reconciler_common.go:292] "operationExecutor.UnmountDevice started for volume \"pvc-65dbf71b-4a04-4d17-b569-5db66ba82c58\" (UniqueName: \"kubernetes.io/csi/hostpath.csi.k8s.io^53cc22c2-b6fd-11f0-ac28-12912ffa2335\") on node \"addons-491859\" "
	Nov 01 08:32:48 addons-491859 kubelet[1302]: I1101 08:32:48.018456    1302 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-zsdvj\" (UniqueName: \"kubernetes.io/projected/0f7fb92d-6302-49e0-8f4b-8df78487ef42-kube-api-access-zsdvj\") on node \"addons-491859\" DevicePath \"\""
	Nov 01 08:32:48 addons-491859 kubelet[1302]: I1101 08:32:48.024066    1302 operation_generator.go:895] UnmountDevice succeeded for volume "pvc-65dbf71b-4a04-4d17-b569-5db66ba82c58" (UniqueName: "kubernetes.io/csi/hostpath.csi.k8s.io^53cc22c2-b6fd-11f0-ac28-12912ffa2335") on node "addons-491859"
	Nov 01 08:32:48 addons-491859 kubelet[1302]: I1101 08:32:48.119053    1302 reconciler_common.go:299] "Volume detached for volume \"pvc-65dbf71b-4a04-4d17-b569-5db66ba82c58\" (UniqueName: \"kubernetes.io/csi/hostpath.csi.k8s.io^53cc22c2-b6fd-11f0-ac28-12912ffa2335\") on node \"addons-491859\" DevicePath \"\""
	Nov 01 08:32:49 addons-491859 kubelet[1302]: I1101 08:32:49.250283    1302 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0f7fb92d-6302-49e0-8f4b-8df78487ef42" path="/var/lib/kubelet/pods/0f7fb92d-6302-49e0-8f4b-8df78487ef42/volumes"
	Nov 01 08:32:57 addons-491859 kubelet[1302]: I1101 08:32:57.710396    1302 scope.go:117] "RemoveContainer" containerID="40a8705b53ec62b56f0443b6af8682bf7a7fa672302fd4b63cc53c6d0500f08f"
	Nov 01 08:32:57 addons-491859 kubelet[1302]: I1101 08:32:57.719586    1302 scope.go:117] "RemoveContainer" containerID="337f0ea7277dc0b57c07ca2a70db0332fa9b435e46902fc55925adb8ea046fa2"
	Nov 01 08:32:57 addons-491859 kubelet[1302]: I1101 08:32:57.727690    1302 scope.go:117] "RemoveContainer" containerID="617714330e3c63b1357ecd5ecd0fe70ee6f2d6ba4271c29146509fc2709fbd9a"
	Nov 01 08:33:16 addons-491859 kubelet[1302]: I1101 08:33:16.247847    1302 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/amd-gpu-device-plugin-6twrx" secret="" err="secret \"gcp-auth\" not found"
	Nov 01 08:33:40 addons-491859 kubelet[1302]: I1101 08:33:40.247962    1302 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/nvidia-device-plugin-daemonset-hbv5p" secret="" err="secret \"gcp-auth\" not found"
	Nov 01 08:33:55 addons-491859 kubelet[1302]: I1101 08:33:55.249184    1302 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-jncr6" secret="" err="secret \"gcp-auth\" not found"
	Nov 01 08:34:29 addons-491859 kubelet[1302]: I1101 08:34:29.744421    1302 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/registry-creds-764b6fb674-rj5zk" podStartSLOduration=264.641831441 podStartE2EDuration="4m25.744399111s" podCreationTimestamp="2025-11-01 08:30:04 +0000 UTC" firstStartedPulling="2025-11-01 08:32:58.271108108 +0000 UTC m=+181.106504581" lastFinishedPulling="2025-11-01 08:32:59.373675775 +0000 UTC m=+182.209072251" observedRunningTime="2025-11-01 08:32:59.998990298 +0000 UTC m=+182.834386789" watchObservedRunningTime="2025-11-01 08:34:29.744399111 +0000 UTC m=+272.579795601"
	Nov 01 08:34:29 addons-491859 kubelet[1302]: I1101 08:34:29.907120    1302 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/8b832b7b-cb75-40f4-a38c-88250ab43306-gcp-creds\") pod \"hello-world-app-5d498dc89-dz9mr\" (UID: \"8b832b7b-cb75-40f4-a38c-88250ab43306\") " pod="default/hello-world-app-5d498dc89-dz9mr"
	Nov 01 08:34:29 addons-491859 kubelet[1302]: I1101 08:34:29.907199    1302 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fz2jb\" (UniqueName: \"kubernetes.io/projected/8b832b7b-cb75-40f4-a38c-88250ab43306-kube-api-access-fz2jb\") pod \"hello-world-app-5d498dc89-dz9mr\" (UID: \"8b832b7b-cb75-40f4-a38c-88250ab43306\") " pod="default/hello-world-app-5d498dc89-dz9mr"
	Nov 01 08:34:31 addons-491859 kubelet[1302]: I1101 08:34:31.340553    1302 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/hello-world-app-5d498dc89-dz9mr" podStartSLOduration=1.553271039 podStartE2EDuration="2.340532689s" podCreationTimestamp="2025-11-01 08:34:29 +0000 UTC" firstStartedPulling="2025-11-01 08:34:30.072138673 +0000 UTC m=+272.907535141" lastFinishedPulling="2025-11-01 08:34:30.859400307 +0000 UTC m=+273.694796791" observedRunningTime="2025-11-01 08:34:31.340313915 +0000 UTC m=+274.175710408" watchObservedRunningTime="2025-11-01 08:34:31.340532689 +0000 UTC m=+274.175929179"
	
	
	==> storage-provisioner [87757a0f68b4c84bb6b6a0830633d098f646e6d9b6aa521bccfaeb77b540635f] <==
	W1101 08:34:05.608346       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 08:34:07.611559       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 08:34:07.616382       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 08:34:09.619978       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 08:34:09.624457       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 08:34:11.627840       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 08:34:11.631772       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 08:34:13.634492       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 08:34:13.638159       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 08:34:15.641562       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 08:34:15.646554       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 08:34:17.649920       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 08:34:17.654400       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 08:34:19.657732       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 08:34:19.662689       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 08:34:21.665740       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 08:34:21.670793       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 08:34:23.673599       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 08:34:23.677961       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 08:34:25.681387       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 08:34:25.689690       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 08:34:27.692195       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 08:34:27.695963       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 08:34:29.699257       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 08:34:29.703377       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
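
The storage-provisioner block above is dominated by client-go deprecation warnings: the provisioner keeps listing and watching core/v1 Endpoints (most likely for its leader-election lock), which API servers flag from v1.33 on in favour of discovery.k8s.io/v1 EndpointSlice. As a hedged sketch, both resources can be inspected on this cluster with kubectl (the kube-system namespace is the one the provisioner runs in; no specific object name is assumed):

	# the deprecated core/v1 resource the provisioner still lists/watches
	kubectl --context addons-491859 -n kube-system get endpoints
	# the discovery.k8s.io/v1 replacement the warning points to
	kubectl --context addons-491859 -n kube-system get endpointslices.discovery.k8s.io
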
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-491859 -n addons-491859
helpers_test.go:269: (dbg) Run:  kubectl --context addons-491859 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: ingress-nginx-admission-create-hh4rd ingress-nginx-admission-patch-lsz25
helpers_test.go:282: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context addons-491859 describe pod ingress-nginx-admission-create-hh4rd ingress-nginx-admission-patch-lsz25
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context addons-491859 describe pod ingress-nginx-admission-create-hh4rd ingress-nginx-admission-patch-lsz25: exit status 1 (58.08023ms)

** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-hh4rd" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-lsz25" not found

** /stderr **
helpers_test.go:287: kubectl --context addons-491859 describe pod ingress-nginx-admission-create-hh4rd ingress-nginx-admission-patch-lsz25: exit status 1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-491859 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-491859 addons disable ingress-dns --alsologtostderr -v=1: exit status 11 (245.660518ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1101 08:34:32.356233   25701 out.go:360] Setting OutFile to fd 1 ...
	I1101 08:34:32.356422   25701 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 08:34:32.356433   25701 out.go:374] Setting ErrFile to fd 2...
	I1101 08:34:32.356438   25701 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 08:34:32.356644   25701 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21835-5913/.minikube/bin
	I1101 08:34:32.356928   25701 mustload.go:66] Loading cluster: addons-491859
	I1101 08:34:32.357266   25701 config.go:182] Loaded profile config "addons-491859": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 08:34:32.357281   25701 addons.go:607] checking whether the cluster is paused
	I1101 08:34:32.357366   25701 config.go:182] Loaded profile config "addons-491859": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 08:34:32.357377   25701 host.go:66] Checking if "addons-491859" exists ...
	I1101 08:34:32.357761   25701 cli_runner.go:164] Run: docker container inspect addons-491859 --format={{.State.Status}}
	I1101 08:34:32.375957   25701 ssh_runner.go:195] Run: systemctl --version
	I1101 08:34:32.376015   25701 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-491859
	I1101 08:34:32.393471   25701 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21835-5913/.minikube/machines/addons-491859/id_rsa Username:docker}
	I1101 08:34:32.493010   25701 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1101 08:34:32.493117   25701 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1101 08:34:32.522443   25701 cri.go:89] found id: "80a16cf786091a2e00d759c7567a2f3a825a159621c6164140415e9898161757"
	I1101 08:34:32.522464   25701 cri.go:89] found id: "33e4e1fc1e33072f888a46aa17d3beb4e58f11877a9d27925e1e5d968eb6c903"
	I1101 08:34:32.522468   25701 cri.go:89] found id: "f04af0fd3a62dfc9f83c9abac1b7ccb6528a248e3fd3ee02ea1f2a7350778e83"
	I1101 08:34:32.522470   25701 cri.go:89] found id: "a1f3a49b7f394fcf06e3ec79eef028b31fb8f10e2d0269aa4eec27450086f2e9"
	I1101 08:34:32.522473   25701 cri.go:89] found id: "ff4bdf52bbb882d70c007d186f38568cb5286b9a2116e10107044414d1c422b0"
	I1101 08:34:32.522477   25701 cri.go:89] found id: "9d1050c081be96d28152bfd4e229378b4cc1d8c31d74f567fbc905b5e676cbe5"
	I1101 08:34:32.522479   25701 cri.go:89] found id: "2c81dda5dfe97017e1ea451a903bb723503013671dfd4ad2848dbd7ed4c00fda"
	I1101 08:34:32.522481   25701 cri.go:89] found id: "0f17b27c9fb94821e21590f954f59af583f7f28766b74bcf54fd77fd4403631f"
	I1101 08:34:32.522483   25701 cri.go:89] found id: "3070142e889654833dbabc836972d24ca0160e211e6a01dc410037b3d06aa377"
	I1101 08:34:32.522494   25701 cri.go:89] found id: "e0fe6aa919f9f7ec3e5dd5de78f0ba1c29746db4b58ff19fe034196dcb04a040"
	I1101 08:34:32.522497   25701 cri.go:89] found id: "dd32f839b496afac7e54669ede10e44b695513bd1f08cb2572d080421d76ed1f"
	I1101 08:34:32.522500   25701 cri.go:89] found id: "b8dc66998b8c65737a3fc68f94611d5a75e4841817858e50cf8f41fe3d0b9111"
	I1101 08:34:32.522503   25701 cri.go:89] found id: "c4c4e8392feed85ce6d8b52f77463bc2a8238dd093e730bd11ad824f180a3227"
	I1101 08:34:32.522505   25701 cri.go:89] found id: "a4c41d6f050f2ca6af53a5d7a6a54f2b04fb24731eca6d7272b14503b747f50d"
	I1101 08:34:32.522508   25701 cri.go:89] found id: "2b4413f8423a31353523e4d44f7675fac21836f4e3b491f3d3f19955b8251025"
	I1101 08:34:32.522512   25701 cri.go:89] found id: "73d495a359ef08303218d0bd2af8743a68b70af8ffdfadd49ac606f145b559b6"
	I1101 08:34:32.522515   25701 cri.go:89] found id: "18fc9837ab4ea8c07f85c79610c9eda88508e53a37801274e8022d17c69f1a98"
	I1101 08:34:32.522518   25701 cri.go:89] found id: "87757a0f68b4c84bb6b6a0830633d098f646e6d9b6aa521bccfaeb77b540635f"
	I1101 08:34:32.522520   25701 cri.go:89] found id: "f17c2b6b25fbc93674f08548b5309b9715e2cb6d7600ac5d6557505072b3fb3f"
	I1101 08:34:32.522522   25701 cri.go:89] found id: "c60507f296e9571f568638d8fac5f87c704242925cf7b4aa378db809c3e3176b"
	I1101 08:34:32.522527   25701 cri.go:89] found id: "4c1ad1a76dfd8c2d5f52abbb115ac02bc7871e82ffe8eed8b2bce0574ab65ce3"
	I1101 08:34:32.522530   25701 cri.go:89] found id: "808e84f4795d8f4c21c0316fc7e7437f547a0d653508d0419288257452ccf97b"
	I1101 08:34:32.522532   25701 cri.go:89] found id: "d4c72eaef44361ee96a4bd979c97bf60778beaa33b136b1407ada4c192a11240"
	I1101 08:34:32.522534   25701 cri.go:89] found id: "cdda903ada7547082c69c9584210b581ad9bfe62602052de013ddc3c59d043bc"
	I1101 08:34:32.522537   25701 cri.go:89] found id: "b29235edc538375f7d6a03229d64488db46c49997437218731a8cd64edc28444"
	I1101 08:34:32.522539   25701 cri.go:89] found id: ""
	I1101 08:34:32.522580   25701 ssh_runner.go:195] Run: sudo runc list -f json
	I1101 08:34:32.537118   25701 out.go:203] 
	W1101 08:34:32.538275   25701 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T08:34:32Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T08:34:32Z" level=error msg="open /run/runc: no such file or directory"
	
	W1101 08:34:32.538294   25701 out.go:285] * 
	* 
	W1101 08:34:32.541330   25701 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_4116e8848b7c0e6a40fa9061a5ca6da2e0eb6ead_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_4116e8848b7c0e6a40fa9061a5ca6da2e0eb6ead_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1101 08:34:32.542671   25701 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable ingress-dns addon: args "out/minikube-linux-amd64 -p addons-491859 addons disable ingress-dns --alsologtostderr -v=1": exit status 11
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-491859 addons disable ingress --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-491859 addons disable ingress --alsologtostderr -v=1: exit status 11 (248.06767ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1101 08:34:32.601403   25762 out.go:360] Setting OutFile to fd 1 ...
	I1101 08:34:32.601561   25762 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 08:34:32.601574   25762 out.go:374] Setting ErrFile to fd 2...
	I1101 08:34:32.601579   25762 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 08:34:32.601854   25762 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21835-5913/.minikube/bin
	I1101 08:34:32.602256   25762 mustload.go:66] Loading cluster: addons-491859
	I1101 08:34:32.602769   25762 config.go:182] Loaded profile config "addons-491859": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 08:34:32.602789   25762 addons.go:607] checking whether the cluster is paused
	I1101 08:34:32.602906   25762 config.go:182] Loaded profile config "addons-491859": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 08:34:32.602922   25762 host.go:66] Checking if "addons-491859" exists ...
	I1101 08:34:32.603313   25762 cli_runner.go:164] Run: docker container inspect addons-491859 --format={{.State.Status}}
	I1101 08:34:32.622441   25762 ssh_runner.go:195] Run: systemctl --version
	I1101 08:34:32.622497   25762 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-491859
	I1101 08:34:32.640560   25762 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21835-5913/.minikube/machines/addons-491859/id_rsa Username:docker}
	I1101 08:34:32.739941   25762 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1101 08:34:32.740012   25762 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1101 08:34:32.771631   25762 cri.go:89] found id: "80a16cf786091a2e00d759c7567a2f3a825a159621c6164140415e9898161757"
	I1101 08:34:32.771653   25762 cri.go:89] found id: "33e4e1fc1e33072f888a46aa17d3beb4e58f11877a9d27925e1e5d968eb6c903"
	I1101 08:34:32.771657   25762 cri.go:89] found id: "f04af0fd3a62dfc9f83c9abac1b7ccb6528a248e3fd3ee02ea1f2a7350778e83"
	I1101 08:34:32.771660   25762 cri.go:89] found id: "a1f3a49b7f394fcf06e3ec79eef028b31fb8f10e2d0269aa4eec27450086f2e9"
	I1101 08:34:32.771662   25762 cri.go:89] found id: "ff4bdf52bbb882d70c007d186f38568cb5286b9a2116e10107044414d1c422b0"
	I1101 08:34:32.771666   25762 cri.go:89] found id: "9d1050c081be96d28152bfd4e229378b4cc1d8c31d74f567fbc905b5e676cbe5"
	I1101 08:34:32.771669   25762 cri.go:89] found id: "2c81dda5dfe97017e1ea451a903bb723503013671dfd4ad2848dbd7ed4c00fda"
	I1101 08:34:32.771671   25762 cri.go:89] found id: "0f17b27c9fb94821e21590f954f59af583f7f28766b74bcf54fd77fd4403631f"
	I1101 08:34:32.771674   25762 cri.go:89] found id: "3070142e889654833dbabc836972d24ca0160e211e6a01dc410037b3d06aa377"
	I1101 08:34:32.771683   25762 cri.go:89] found id: "e0fe6aa919f9f7ec3e5dd5de78f0ba1c29746db4b58ff19fe034196dcb04a040"
	I1101 08:34:32.771686   25762 cri.go:89] found id: "dd32f839b496afac7e54669ede10e44b695513bd1f08cb2572d080421d76ed1f"
	I1101 08:34:32.771689   25762 cri.go:89] found id: "b8dc66998b8c65737a3fc68f94611d5a75e4841817858e50cf8f41fe3d0b9111"
	I1101 08:34:32.771691   25762 cri.go:89] found id: "c4c4e8392feed85ce6d8b52f77463bc2a8238dd093e730bd11ad824f180a3227"
	I1101 08:34:32.771693   25762 cri.go:89] found id: "a4c41d6f050f2ca6af53a5d7a6a54f2b04fb24731eca6d7272b14503b747f50d"
	I1101 08:34:32.771696   25762 cri.go:89] found id: "2b4413f8423a31353523e4d44f7675fac21836f4e3b491f3d3f19955b8251025"
	I1101 08:34:32.771700   25762 cri.go:89] found id: "73d495a359ef08303218d0bd2af8743a68b70af8ffdfadd49ac606f145b559b6"
	I1101 08:34:32.771703   25762 cri.go:89] found id: "18fc9837ab4ea8c07f85c79610c9eda88508e53a37801274e8022d17c69f1a98"
	I1101 08:34:32.771708   25762 cri.go:89] found id: "87757a0f68b4c84bb6b6a0830633d098f646e6d9b6aa521bccfaeb77b540635f"
	I1101 08:34:32.771710   25762 cri.go:89] found id: "f17c2b6b25fbc93674f08548b5309b9715e2cb6d7600ac5d6557505072b3fb3f"
	I1101 08:34:32.771712   25762 cri.go:89] found id: "c60507f296e9571f568638d8fac5f87c704242925cf7b4aa378db809c3e3176b"
	I1101 08:34:32.771717   25762 cri.go:89] found id: "4c1ad1a76dfd8c2d5f52abbb115ac02bc7871e82ffe8eed8b2bce0574ab65ce3"
	I1101 08:34:32.771719   25762 cri.go:89] found id: "808e84f4795d8f4c21c0316fc7e7437f547a0d653508d0419288257452ccf97b"
	I1101 08:34:32.771721   25762 cri.go:89] found id: "d4c72eaef44361ee96a4bd979c97bf60778beaa33b136b1407ada4c192a11240"
	I1101 08:34:32.771723   25762 cri.go:89] found id: "cdda903ada7547082c69c9584210b581ad9bfe62602052de013ddc3c59d043bc"
	I1101 08:34:32.771725   25762 cri.go:89] found id: "b29235edc538375f7d6a03229d64488db46c49997437218731a8cd64edc28444"
	I1101 08:34:32.771728   25762 cri.go:89] found id: ""
	I1101 08:34:32.771765   25762 ssh_runner.go:195] Run: sudo runc list -f json
	I1101 08:34:32.785579   25762 out.go:203] 
	W1101 08:34:32.786735   25762 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T08:34:32Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T08:34:32Z" level=error msg="open /run/runc: no such file or directory"
	
	W1101 08:34:32.786759   25762 out.go:285] * 
	* 
	W1101 08:34:32.789828   25762 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_62553deefc570c97f2052ef703df7b8905a654d6_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_62553deefc570c97f2052ef703df7b8905a654d6_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1101 08:34:32.790982   25762 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable ingress addon: args "out/minikube-linux-amd64 -p addons-491859 addons disable ingress --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Ingress (148.58s)
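
Every addons disable call in this run fails the same way: MK_ADDON_DISABLE_PAUSED is raised because minikube's "is the cluster paused" check, after successfully listing the kube-system containers with crictl, shells out to sudo runc list -f json, and runc exits 1 with "open /run/runc: no such file or directory" on this crio node. A minimal sketch for reproducing that check by hand, assuming the docker driver and the addons-491859 profile from this run (the crictl and runc invocations are the same ones visible in the stderr above):

	# state of the node container (the same docker inspect the CLI runs)
	docker container inspect addons-491859 --format '{{.State.Status}}'
	# listing kube-system containers through crictl inside the node succeeds
	minikube -p addons-491859 ssh -- sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system
	# the step that fails: runc looks for its state directory under /run/runc, which is absent here
	minikube -p addons-491859 ssh -- sudo runc list -f json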

TestAddons/parallel/InspektorGadget (5.26s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:352: "gadget-ggvnk" [ad0c032e-b130-4f01-b6f0-3d86f934a833] Running
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.00356462s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-491859 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-491859 addons disable inspektor-gadget --alsologtostderr -v=1: exit status 11 (250.520449ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1101 08:32:10.411319   22358 out.go:360] Setting OutFile to fd 1 ...
	I1101 08:32:10.411474   22358 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 08:32:10.411482   22358 out.go:374] Setting ErrFile to fd 2...
	I1101 08:32:10.411486   22358 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 08:32:10.411705   22358 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21835-5913/.minikube/bin
	I1101 08:32:10.412016   22358 mustload.go:66] Loading cluster: addons-491859
	I1101 08:32:10.412350   22358 config.go:182] Loaded profile config "addons-491859": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 08:32:10.412366   22358 addons.go:607] checking whether the cluster is paused
	I1101 08:32:10.412457   22358 config.go:182] Loaded profile config "addons-491859": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 08:32:10.412469   22358 host.go:66] Checking if "addons-491859" exists ...
	I1101 08:32:10.412828   22358 cli_runner.go:164] Run: docker container inspect addons-491859 --format={{.State.Status}}
	I1101 08:32:10.430970   22358 ssh_runner.go:195] Run: systemctl --version
	I1101 08:32:10.431026   22358 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-491859
	I1101 08:32:10.449961   22358 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21835-5913/.minikube/machines/addons-491859/id_rsa Username:docker}
	I1101 08:32:10.550650   22358 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1101 08:32:10.550750   22358 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1101 08:32:10.581154   22358 cri.go:89] found id: "33e4e1fc1e33072f888a46aa17d3beb4e58f11877a9d27925e1e5d968eb6c903"
	I1101 08:32:10.581185   22358 cri.go:89] found id: "f04af0fd3a62dfc9f83c9abac1b7ccb6528a248e3fd3ee02ea1f2a7350778e83"
	I1101 08:32:10.581193   22358 cri.go:89] found id: "a1f3a49b7f394fcf06e3ec79eef028b31fb8f10e2d0269aa4eec27450086f2e9"
	I1101 08:32:10.581199   22358 cri.go:89] found id: "ff4bdf52bbb882d70c007d186f38568cb5286b9a2116e10107044414d1c422b0"
	I1101 08:32:10.581204   22358 cri.go:89] found id: "9d1050c081be96d28152bfd4e229378b4cc1d8c31d74f567fbc905b5e676cbe5"
	I1101 08:32:10.581210   22358 cri.go:89] found id: "2c81dda5dfe97017e1ea451a903bb723503013671dfd4ad2848dbd7ed4c00fda"
	I1101 08:32:10.581214   22358 cri.go:89] found id: "0f17b27c9fb94821e21590f954f59af583f7f28766b74bcf54fd77fd4403631f"
	I1101 08:32:10.581218   22358 cri.go:89] found id: "3070142e889654833dbabc836972d24ca0160e211e6a01dc410037b3d06aa377"
	I1101 08:32:10.581222   22358 cri.go:89] found id: "e0fe6aa919f9f7ec3e5dd5de78f0ba1c29746db4b58ff19fe034196dcb04a040"
	I1101 08:32:10.581235   22358 cri.go:89] found id: "dd32f839b496afac7e54669ede10e44b695513bd1f08cb2572d080421d76ed1f"
	I1101 08:32:10.581241   22358 cri.go:89] found id: "b8dc66998b8c65737a3fc68f94611d5a75e4841817858e50cf8f41fe3d0b9111"
	I1101 08:32:10.581243   22358 cri.go:89] found id: "c4c4e8392feed85ce6d8b52f77463bc2a8238dd093e730bd11ad824f180a3227"
	I1101 08:32:10.581246   22358 cri.go:89] found id: "a4c41d6f050f2ca6af53a5d7a6a54f2b04fb24731eca6d7272b14503b747f50d"
	I1101 08:32:10.581248   22358 cri.go:89] found id: "2b4413f8423a31353523e4d44f7675fac21836f4e3b491f3d3f19955b8251025"
	I1101 08:32:10.581251   22358 cri.go:89] found id: "73d495a359ef08303218d0bd2af8743a68b70af8ffdfadd49ac606f145b559b6"
	I1101 08:32:10.581263   22358 cri.go:89] found id: "18fc9837ab4ea8c07f85c79610c9eda88508e53a37801274e8022d17c69f1a98"
	I1101 08:32:10.581271   22358 cri.go:89] found id: "87757a0f68b4c84bb6b6a0830633d098f646e6d9b6aa521bccfaeb77b540635f"
	I1101 08:32:10.581277   22358 cri.go:89] found id: "f17c2b6b25fbc93674f08548b5309b9715e2cb6d7600ac5d6557505072b3fb3f"
	I1101 08:32:10.581282   22358 cri.go:89] found id: "c60507f296e9571f568638d8fac5f87c704242925cf7b4aa378db809c3e3176b"
	I1101 08:32:10.581286   22358 cri.go:89] found id: "4c1ad1a76dfd8c2d5f52abbb115ac02bc7871e82ffe8eed8b2bce0574ab65ce3"
	I1101 08:32:10.581293   22358 cri.go:89] found id: "808e84f4795d8f4c21c0316fc7e7437f547a0d653508d0419288257452ccf97b"
	I1101 08:32:10.581301   22358 cri.go:89] found id: "d4c72eaef44361ee96a4bd979c97bf60778beaa33b136b1407ada4c192a11240"
	I1101 08:32:10.581305   22358 cri.go:89] found id: "cdda903ada7547082c69c9584210b581ad9bfe62602052de013ddc3c59d043bc"
	I1101 08:32:10.581312   22358 cri.go:89] found id: "b29235edc538375f7d6a03229d64488db46c49997437218731a8cd64edc28444"
	I1101 08:32:10.581317   22358 cri.go:89] found id: ""
	I1101 08:32:10.581381   22358 ssh_runner.go:195] Run: sudo runc list -f json
	I1101 08:32:10.596499   22358 out.go:203] 
	W1101 08:32:10.598060   22358 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T08:32:10Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T08:32:10Z" level=error msg="open /run/runc: no such file or directory"
	
	W1101 08:32:10.598082   22358 out.go:285] * 
	* 
	W1101 08:32:10.601110   22358 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_07218961934993dd21acc63caaf1aa08873c018e_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_07218961934993dd21acc63caaf1aa08873c018e_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1101 08:32:10.602372   22358 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable inspektor-gadget addon: args "out/minikube-linux-amd64 -p addons-491859 addons disable inspektor-gadget --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/InspektorGadget (5.26s)

TestAddons/parallel/MetricsServer (5.33s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:455: metrics-server stabilized in 3.036397ms
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:352: "metrics-server-85b7d694d7-8j2pv" [8863ea1b-774d-469e-8487-d29ec16b131c] Running
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.002836126s
addons_test.go:463: (dbg) Run:  kubectl --context addons-491859 top pods -n kube-system
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-491859 addons disable metrics-server --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-491859 addons disable metrics-server --alsologtostderr -v=1: exit status 11 (257.877999ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1101 08:32:02.338905   20979 out.go:360] Setting OutFile to fd 1 ...
	I1101 08:32:02.339065   20979 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 08:32:02.339077   20979 out.go:374] Setting ErrFile to fd 2...
	I1101 08:32:02.339081   20979 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 08:32:02.339266   20979 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21835-5913/.minikube/bin
	I1101 08:32:02.339533   20979 mustload.go:66] Loading cluster: addons-491859
	I1101 08:32:02.339914   20979 config.go:182] Loaded profile config "addons-491859": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 08:32:02.339929   20979 addons.go:607] checking whether the cluster is paused
	I1101 08:32:02.340014   20979 config.go:182] Loaded profile config "addons-491859": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 08:32:02.340026   20979 host.go:66] Checking if "addons-491859" exists ...
	I1101 08:32:02.340389   20979 cli_runner.go:164] Run: docker container inspect addons-491859 --format={{.State.Status}}
	I1101 08:32:02.359512   20979 ssh_runner.go:195] Run: systemctl --version
	I1101 08:32:02.359582   20979 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-491859
	I1101 08:32:02.378244   20979 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21835-5913/.minikube/machines/addons-491859/id_rsa Username:docker}
	I1101 08:32:02.479020   20979 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1101 08:32:02.479104   20979 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1101 08:32:02.509974   20979 cri.go:89] found id: "33e4e1fc1e33072f888a46aa17d3beb4e58f11877a9d27925e1e5d968eb6c903"
	I1101 08:32:02.509996   20979 cri.go:89] found id: "f04af0fd3a62dfc9f83c9abac1b7ccb6528a248e3fd3ee02ea1f2a7350778e83"
	I1101 08:32:02.510000   20979 cri.go:89] found id: "a1f3a49b7f394fcf06e3ec79eef028b31fb8f10e2d0269aa4eec27450086f2e9"
	I1101 08:32:02.510003   20979 cri.go:89] found id: "ff4bdf52bbb882d70c007d186f38568cb5286b9a2116e10107044414d1c422b0"
	I1101 08:32:02.510007   20979 cri.go:89] found id: "9d1050c081be96d28152bfd4e229378b4cc1d8c31d74f567fbc905b5e676cbe5"
	I1101 08:32:02.510010   20979 cri.go:89] found id: "2c81dda5dfe97017e1ea451a903bb723503013671dfd4ad2848dbd7ed4c00fda"
	I1101 08:32:02.510012   20979 cri.go:89] found id: "0f17b27c9fb94821e21590f954f59af583f7f28766b74bcf54fd77fd4403631f"
	I1101 08:32:02.510014   20979 cri.go:89] found id: "3070142e889654833dbabc836972d24ca0160e211e6a01dc410037b3d06aa377"
	I1101 08:32:02.510017   20979 cri.go:89] found id: "e0fe6aa919f9f7ec3e5dd5de78f0ba1c29746db4b58ff19fe034196dcb04a040"
	I1101 08:32:02.510022   20979 cri.go:89] found id: "dd32f839b496afac7e54669ede10e44b695513bd1f08cb2572d080421d76ed1f"
	I1101 08:32:02.510026   20979 cri.go:89] found id: "b8dc66998b8c65737a3fc68f94611d5a75e4841817858e50cf8f41fe3d0b9111"
	I1101 08:32:02.510029   20979 cri.go:89] found id: "c4c4e8392feed85ce6d8b52f77463bc2a8238dd093e730bd11ad824f180a3227"
	I1101 08:32:02.510031   20979 cri.go:89] found id: "a4c41d6f050f2ca6af53a5d7a6a54f2b04fb24731eca6d7272b14503b747f50d"
	I1101 08:32:02.510034   20979 cri.go:89] found id: "2b4413f8423a31353523e4d44f7675fac21836f4e3b491f3d3f19955b8251025"
	I1101 08:32:02.510036   20979 cri.go:89] found id: "73d495a359ef08303218d0bd2af8743a68b70af8ffdfadd49ac606f145b559b6"
	I1101 08:32:02.510044   20979 cri.go:89] found id: "18fc9837ab4ea8c07f85c79610c9eda88508e53a37801274e8022d17c69f1a98"
	I1101 08:32:02.510047   20979 cri.go:89] found id: "87757a0f68b4c84bb6b6a0830633d098f646e6d9b6aa521bccfaeb77b540635f"
	I1101 08:32:02.510052   20979 cri.go:89] found id: "f17c2b6b25fbc93674f08548b5309b9715e2cb6d7600ac5d6557505072b3fb3f"
	I1101 08:32:02.510055   20979 cri.go:89] found id: "c60507f296e9571f568638d8fac5f87c704242925cf7b4aa378db809c3e3176b"
	I1101 08:32:02.510057   20979 cri.go:89] found id: "4c1ad1a76dfd8c2d5f52abbb115ac02bc7871e82ffe8eed8b2bce0574ab65ce3"
	I1101 08:32:02.510062   20979 cri.go:89] found id: "808e84f4795d8f4c21c0316fc7e7437f547a0d653508d0419288257452ccf97b"
	I1101 08:32:02.510067   20979 cri.go:89] found id: "d4c72eaef44361ee96a4bd979c97bf60778beaa33b136b1407ada4c192a11240"
	I1101 08:32:02.510070   20979 cri.go:89] found id: "cdda903ada7547082c69c9584210b581ad9bfe62602052de013ddc3c59d043bc"
	I1101 08:32:02.510072   20979 cri.go:89] found id: "b29235edc538375f7d6a03229d64488db46c49997437218731a8cd64edc28444"
	I1101 08:32:02.510075   20979 cri.go:89] found id: ""
	I1101 08:32:02.510112   20979 ssh_runner.go:195] Run: sudo runc list -f json
	I1101 08:32:02.525396   20979 out.go:203] 
	W1101 08:32:02.526749   20979 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T08:32:02Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T08:32:02Z" level=error msg="open /run/runc: no such file or directory"
	
	W1101 08:32:02.526778   20979 out.go:285] * 
	* 
	W1101 08:32:02.529777   20979 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9e377edc2b59264359e9c26f81b048e390fa608a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9e377edc2b59264359e9c26f81b048e390fa608a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1101 08:32:02.531168   20979 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable metrics-server addon: args "out/minikube-linux-amd64 -p addons-491859 addons disable metrics-server --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/MetricsServer (5.33s)

TestAddons/parallel/CSI (49.7s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
addons_test.go:549: csi-hostpath-driver pods stabilized in 3.858929ms
addons_test.go:552: (dbg) Run:  kubectl --context addons-491859 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:557: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-491859 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-491859 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-491859 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-491859 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-491859 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-491859 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-491859 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-491859 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-491859 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-491859 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-491859 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-491859 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-491859 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-491859 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-491859 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-491859 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-491859 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:562: (dbg) Run:  kubectl --context addons-491859 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:567: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:352: "task-pv-pod" [6c1471f7-231e-4bdb-90c4-3ec2516cda30] Pending
helpers_test.go:352: "task-pv-pod" [6c1471f7-231e-4bdb-90c4-3ec2516cda30] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod" [6c1471f7-231e-4bdb-90c4-3ec2516cda30] Running
addons_test.go:567: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 9.00360089s
addons_test.go:572: (dbg) Run:  kubectl --context addons-491859 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:577: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:427: (dbg) Run:  kubectl --context addons-491859 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: (dbg) Run:  kubectl --context addons-491859 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:582: (dbg) Run:  kubectl --context addons-491859 delete pod task-pv-pod
addons_test.go:588: (dbg) Run:  kubectl --context addons-491859 delete pvc hpvc
addons_test.go:594: (dbg) Run:  kubectl --context addons-491859 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:599: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-491859 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-491859 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-491859 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-491859 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-491859 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-491859 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-491859 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-491859 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-491859 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-491859 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-491859 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-491859 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-491859 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-491859 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-491859 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:604: (dbg) Run:  kubectl --context addons-491859 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:609: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:352: "task-pv-pod-restore" [0f7fb92d-6302-49e0-8f4b-8df78487ef42] Pending
helpers_test.go:352: "task-pv-pod-restore" [0f7fb92d-6302-49e0-8f4b-8df78487ef42] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod-restore" [0f7fb92d-6302-49e0-8f4b-8df78487ef42] Running
addons_test.go:609: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.003940114s
addons_test.go:614: (dbg) Run:  kubectl --context addons-491859 delete pod task-pv-pod-restore
addons_test.go:618: (dbg) Run:  kubectl --context addons-491859 delete pvc hpvc-restore
addons_test.go:622: (dbg) Run:  kubectl --context addons-491859 delete volumesnapshot new-snapshot-demo
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-491859 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-491859 addons disable volumesnapshots --alsologtostderr -v=1: exit status 11 (252.5347ms)

-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1101 08:32:48.339526   23476 out.go:360] Setting OutFile to fd 1 ...
	I1101 08:32:48.339846   23476 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 08:32:48.339872   23476 out.go:374] Setting ErrFile to fd 2...
	I1101 08:32:48.339878   23476 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 08:32:48.340103   23476 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21835-5913/.minikube/bin
	I1101 08:32:48.340383   23476 mustload.go:66] Loading cluster: addons-491859
	I1101 08:32:48.340731   23476 config.go:182] Loaded profile config "addons-491859": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 08:32:48.340746   23476 addons.go:607] checking whether the cluster is paused
	I1101 08:32:48.340828   23476 config.go:182] Loaded profile config "addons-491859": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 08:32:48.340839   23476 host.go:66] Checking if "addons-491859" exists ...
	I1101 08:32:48.341250   23476 cli_runner.go:164] Run: docker container inspect addons-491859 --format={{.State.Status}}
	I1101 08:32:48.360552   23476 ssh_runner.go:195] Run: systemctl --version
	I1101 08:32:48.360609   23476 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-491859
	I1101 08:32:48.379215   23476 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21835-5913/.minikube/machines/addons-491859/id_rsa Username:docker}
	I1101 08:32:48.479943   23476 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1101 08:32:48.480008   23476 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1101 08:32:48.510413   23476 cri.go:89] found id: "33e4e1fc1e33072f888a46aa17d3beb4e58f11877a9d27925e1e5d968eb6c903"
	I1101 08:32:48.510434   23476 cri.go:89] found id: "f04af0fd3a62dfc9f83c9abac1b7ccb6528a248e3fd3ee02ea1f2a7350778e83"
	I1101 08:32:48.510438   23476 cri.go:89] found id: "a1f3a49b7f394fcf06e3ec79eef028b31fb8f10e2d0269aa4eec27450086f2e9"
	I1101 08:32:48.510443   23476 cri.go:89] found id: "ff4bdf52bbb882d70c007d186f38568cb5286b9a2116e10107044414d1c422b0"
	I1101 08:32:48.510446   23476 cri.go:89] found id: "9d1050c081be96d28152bfd4e229378b4cc1d8c31d74f567fbc905b5e676cbe5"
	I1101 08:32:48.510449   23476 cri.go:89] found id: "2c81dda5dfe97017e1ea451a903bb723503013671dfd4ad2848dbd7ed4c00fda"
	I1101 08:32:48.510451   23476 cri.go:89] found id: "0f17b27c9fb94821e21590f954f59af583f7f28766b74bcf54fd77fd4403631f"
	I1101 08:32:48.510453   23476 cri.go:89] found id: "3070142e889654833dbabc836972d24ca0160e211e6a01dc410037b3d06aa377"
	I1101 08:32:48.510456   23476 cri.go:89] found id: "e0fe6aa919f9f7ec3e5dd5de78f0ba1c29746db4b58ff19fe034196dcb04a040"
	I1101 08:32:48.510467   23476 cri.go:89] found id: "dd32f839b496afac7e54669ede10e44b695513bd1f08cb2572d080421d76ed1f"
	I1101 08:32:48.510470   23476 cri.go:89] found id: "b8dc66998b8c65737a3fc68f94611d5a75e4841817858e50cf8f41fe3d0b9111"
	I1101 08:32:48.510473   23476 cri.go:89] found id: "c4c4e8392feed85ce6d8b52f77463bc2a8238dd093e730bd11ad824f180a3227"
	I1101 08:32:48.510475   23476 cri.go:89] found id: "a4c41d6f050f2ca6af53a5d7a6a54f2b04fb24731eca6d7272b14503b747f50d"
	I1101 08:32:48.510478   23476 cri.go:89] found id: "2b4413f8423a31353523e4d44f7675fac21836f4e3b491f3d3f19955b8251025"
	I1101 08:32:48.510481   23476 cri.go:89] found id: "73d495a359ef08303218d0bd2af8743a68b70af8ffdfadd49ac606f145b559b6"
	I1101 08:32:48.510487   23476 cri.go:89] found id: "18fc9837ab4ea8c07f85c79610c9eda88508e53a37801274e8022d17c69f1a98"
	I1101 08:32:48.510490   23476 cri.go:89] found id: "87757a0f68b4c84bb6b6a0830633d098f646e6d9b6aa521bccfaeb77b540635f"
	I1101 08:32:48.510494   23476 cri.go:89] found id: "f17c2b6b25fbc93674f08548b5309b9715e2cb6d7600ac5d6557505072b3fb3f"
	I1101 08:32:48.510497   23476 cri.go:89] found id: "c60507f296e9571f568638d8fac5f87c704242925cf7b4aa378db809c3e3176b"
	I1101 08:32:48.510499   23476 cri.go:89] found id: "4c1ad1a76dfd8c2d5f52abbb115ac02bc7871e82ffe8eed8b2bce0574ab65ce3"
	I1101 08:32:48.510501   23476 cri.go:89] found id: "808e84f4795d8f4c21c0316fc7e7437f547a0d653508d0419288257452ccf97b"
	I1101 08:32:48.510504   23476 cri.go:89] found id: "d4c72eaef44361ee96a4bd979c97bf60778beaa33b136b1407ada4c192a11240"
	I1101 08:32:48.510508   23476 cri.go:89] found id: "cdda903ada7547082c69c9584210b581ad9bfe62602052de013ddc3c59d043bc"
	I1101 08:32:48.510511   23476 cri.go:89] found id: "b29235edc538375f7d6a03229d64488db46c49997437218731a8cd64edc28444"
	I1101 08:32:48.510513   23476 cri.go:89] found id: ""
	I1101 08:32:48.510552   23476 ssh_runner.go:195] Run: sudo runc list -f json
	I1101 08:32:48.525457   23476 out.go:203] 
	W1101 08:32:48.526962   23476 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T08:32:48Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T08:32:48Z" level=error msg="open /run/runc: no such file or directory"
	
	W1101 08:32:48.527002   23476 out.go:285] * 
	* 
	W1101 08:32:48.530105   23476 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_f6150db7515caf82d8c4c5baeba9fd21f738a7e0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_f6150db7515caf82d8c4c5baeba9fd21f738a7e0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1101 08:32:48.531591   23476 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable volumesnapshots addon: args "out/minikube-linux-amd64 -p addons-491859 addons disable volumesnapshots --alsologtostderr -v=1": exit status 11
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-491859 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-491859 addons disable csi-hostpath-driver --alsologtostderr -v=1: exit status 11 (252.320101ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1101 08:32:48.592812   23536 out.go:360] Setting OutFile to fd 1 ...
	I1101 08:32:48.592992   23536 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 08:32:48.593003   23536 out.go:374] Setting ErrFile to fd 2...
	I1101 08:32:48.593008   23536 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 08:32:48.593204   23536 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21835-5913/.minikube/bin
	I1101 08:32:48.593456   23536 mustload.go:66] Loading cluster: addons-491859
	I1101 08:32:48.593806   23536 config.go:182] Loaded profile config "addons-491859": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 08:32:48.593820   23536 addons.go:607] checking whether the cluster is paused
	I1101 08:32:48.593912   23536 config.go:182] Loaded profile config "addons-491859": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 08:32:48.593924   23536 host.go:66] Checking if "addons-491859" exists ...
	I1101 08:32:48.594307   23536 cli_runner.go:164] Run: docker container inspect addons-491859 --format={{.State.Status}}
	I1101 08:32:48.613329   23536 ssh_runner.go:195] Run: systemctl --version
	I1101 08:32:48.613387   23536 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-491859
	I1101 08:32:48.632230   23536 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21835-5913/.minikube/machines/addons-491859/id_rsa Username:docker}
	I1101 08:32:48.733146   23536 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1101 08:32:48.733240   23536 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1101 08:32:48.763845   23536 cri.go:89] found id: "33e4e1fc1e33072f888a46aa17d3beb4e58f11877a9d27925e1e5d968eb6c903"
	I1101 08:32:48.763894   23536 cri.go:89] found id: "f04af0fd3a62dfc9f83c9abac1b7ccb6528a248e3fd3ee02ea1f2a7350778e83"
	I1101 08:32:48.763901   23536 cri.go:89] found id: "a1f3a49b7f394fcf06e3ec79eef028b31fb8f10e2d0269aa4eec27450086f2e9"
	I1101 08:32:48.763906   23536 cri.go:89] found id: "ff4bdf52bbb882d70c007d186f38568cb5286b9a2116e10107044414d1c422b0"
	I1101 08:32:48.763910   23536 cri.go:89] found id: "9d1050c081be96d28152bfd4e229378b4cc1d8c31d74f567fbc905b5e676cbe5"
	I1101 08:32:48.763916   23536 cri.go:89] found id: "2c81dda5dfe97017e1ea451a903bb723503013671dfd4ad2848dbd7ed4c00fda"
	I1101 08:32:48.763920   23536 cri.go:89] found id: "0f17b27c9fb94821e21590f954f59af583f7f28766b74bcf54fd77fd4403631f"
	I1101 08:32:48.763924   23536 cri.go:89] found id: "3070142e889654833dbabc836972d24ca0160e211e6a01dc410037b3d06aa377"
	I1101 08:32:48.763927   23536 cri.go:89] found id: "e0fe6aa919f9f7ec3e5dd5de78f0ba1c29746db4b58ff19fe034196dcb04a040"
	I1101 08:32:48.763936   23536 cri.go:89] found id: "dd32f839b496afac7e54669ede10e44b695513bd1f08cb2572d080421d76ed1f"
	I1101 08:32:48.763943   23536 cri.go:89] found id: "b8dc66998b8c65737a3fc68f94611d5a75e4841817858e50cf8f41fe3d0b9111"
	I1101 08:32:48.763948   23536 cri.go:89] found id: "c4c4e8392feed85ce6d8b52f77463bc2a8238dd093e730bd11ad824f180a3227"
	I1101 08:32:48.763956   23536 cri.go:89] found id: "a4c41d6f050f2ca6af53a5d7a6a54f2b04fb24731eca6d7272b14503b747f50d"
	I1101 08:32:48.763960   23536 cri.go:89] found id: "2b4413f8423a31353523e4d44f7675fac21836f4e3b491f3d3f19955b8251025"
	I1101 08:32:48.763964   23536 cri.go:89] found id: "73d495a359ef08303218d0bd2af8743a68b70af8ffdfadd49ac606f145b559b6"
	I1101 08:32:48.763970   23536 cri.go:89] found id: "18fc9837ab4ea8c07f85c79610c9eda88508e53a37801274e8022d17c69f1a98"
	I1101 08:32:48.763977   23536 cri.go:89] found id: "87757a0f68b4c84bb6b6a0830633d098f646e6d9b6aa521bccfaeb77b540635f"
	I1101 08:32:48.763982   23536 cri.go:89] found id: "f17c2b6b25fbc93674f08548b5309b9715e2cb6d7600ac5d6557505072b3fb3f"
	I1101 08:32:48.763985   23536 cri.go:89] found id: "c60507f296e9571f568638d8fac5f87c704242925cf7b4aa378db809c3e3176b"
	I1101 08:32:48.763989   23536 cri.go:89] found id: "4c1ad1a76dfd8c2d5f52abbb115ac02bc7871e82ffe8eed8b2bce0574ab65ce3"
	I1101 08:32:48.763996   23536 cri.go:89] found id: "808e84f4795d8f4c21c0316fc7e7437f547a0d653508d0419288257452ccf97b"
	I1101 08:32:48.763998   23536 cri.go:89] found id: "d4c72eaef44361ee96a4bd979c97bf60778beaa33b136b1407ada4c192a11240"
	I1101 08:32:48.764002   23536 cri.go:89] found id: "cdda903ada7547082c69c9584210b581ad9bfe62602052de013ddc3c59d043bc"
	I1101 08:32:48.764005   23536 cri.go:89] found id: "b29235edc538375f7d6a03229d64488db46c49997437218731a8cd64edc28444"
	I1101 08:32:48.764009   23536 cri.go:89] found id: ""
	I1101 08:32:48.764058   23536 ssh_runner.go:195] Run: sudo runc list -f json
	I1101 08:32:48.779040   23536 out.go:203] 
	W1101 08:32:48.780379   23536 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T08:32:48Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T08:32:48Z" level=error msg="open /run/runc: no such file or directory"
	
	W1101 08:32:48.780396   23536 out.go:285] * 
	* 
	W1101 08:32:48.783422   23536 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_913eef9b964ccef8b5b536327192b81f4aff5da9_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_913eef9b964ccef8b5b536327192b81f4aff5da9_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1101 08:32:48.784803   23536 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable csi-hostpath-driver addon: args "out/minikube-linux-amd64 -p addons-491859 addons disable csi-hostpath-driver --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/CSI (49.70s)
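Note that the snapshot/restore flow itself passed (the restored pod went healthy in ~7s); the FAIL comes from the teardown step. Both disable attempts above stop at the same pre-check: before touching the addon, minikube checks whether the cluster is paused (addons.go:607) by listing kube-system containers with crictl and then running "sudo runc list -f json"; the runc call exits 1 because /run/runc does not exist on this node, and minikube surfaces that as MK_ADDON_DISABLE_PAUSED with exit status 11, so the addon is never actually disabled. A hedged way to confirm this on the same profile, reusing the exact commands shown in the log (whether crio here fronts runc or a different low-level runtime such as crun, and therefore which state directory exists, is an assumption worth checking):

    # same check the log shows minikube performing, run by hand over ssh
    minikube -p addons-491859 ssh -- sudo crictl ps -a --quiet \
      --label io.kubernetes.pod.namespace=kube-system   # succeeds, prints container IDs as above
    minikube -p addons-491859 ssh -- sudo runc list -f json
    # expected on this node: "open /run/runc: no such file or directory", exit 1
    minikube -p addons-491859 ssh -- ls /run/runc /run/crun 2>&1
    # shows which runtime state directory actually exists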

                                                
                                    
TestAddons/parallel/Headlamp (2.84s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:808: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-491859 --alsologtostderr -v=1
addons_test.go:808: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable headlamp -p addons-491859 --alsologtostderr -v=1: exit status 11 (266.660336ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1101 08:31:51.019482   19415 out.go:360] Setting OutFile to fd 1 ...
	I1101 08:31:51.019811   19415 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 08:31:51.019822   19415 out.go:374] Setting ErrFile to fd 2...
	I1101 08:31:51.019827   19415 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 08:31:51.020049   19415 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21835-5913/.minikube/bin
	I1101 08:31:51.020357   19415 mustload.go:66] Loading cluster: addons-491859
	I1101 08:31:51.020742   19415 config.go:182] Loaded profile config "addons-491859": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 08:31:51.020759   19415 addons.go:607] checking whether the cluster is paused
	I1101 08:31:51.020846   19415 config.go:182] Loaded profile config "addons-491859": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 08:31:51.020858   19415 host.go:66] Checking if "addons-491859" exists ...
	I1101 08:31:51.021293   19415 cli_runner.go:164] Run: docker container inspect addons-491859 --format={{.State.Status}}
	I1101 08:31:51.041403   19415 ssh_runner.go:195] Run: systemctl --version
	I1101 08:31:51.041487   19415 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-491859
	I1101 08:31:51.060932   19415 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21835-5913/.minikube/machines/addons-491859/id_rsa Username:docker}
	I1101 08:31:51.162560   19415 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1101 08:31:51.162649   19415 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1101 08:31:51.194452   19415 cri.go:89] found id: "33e4e1fc1e33072f888a46aa17d3beb4e58f11877a9d27925e1e5d968eb6c903"
	I1101 08:31:51.194491   19415 cri.go:89] found id: "f04af0fd3a62dfc9f83c9abac1b7ccb6528a248e3fd3ee02ea1f2a7350778e83"
	I1101 08:31:51.194496   19415 cri.go:89] found id: "a1f3a49b7f394fcf06e3ec79eef028b31fb8f10e2d0269aa4eec27450086f2e9"
	I1101 08:31:51.194500   19415 cri.go:89] found id: "ff4bdf52bbb882d70c007d186f38568cb5286b9a2116e10107044414d1c422b0"
	I1101 08:31:51.194503   19415 cri.go:89] found id: "9d1050c081be96d28152bfd4e229378b4cc1d8c31d74f567fbc905b5e676cbe5"
	I1101 08:31:51.194506   19415 cri.go:89] found id: "2c81dda5dfe97017e1ea451a903bb723503013671dfd4ad2848dbd7ed4c00fda"
	I1101 08:31:51.194508   19415 cri.go:89] found id: "0f17b27c9fb94821e21590f954f59af583f7f28766b74bcf54fd77fd4403631f"
	I1101 08:31:51.194511   19415 cri.go:89] found id: "3070142e889654833dbabc836972d24ca0160e211e6a01dc410037b3d06aa377"
	I1101 08:31:51.194513   19415 cri.go:89] found id: "e0fe6aa919f9f7ec3e5dd5de78f0ba1c29746db4b58ff19fe034196dcb04a040"
	I1101 08:31:51.194518   19415 cri.go:89] found id: "dd32f839b496afac7e54669ede10e44b695513bd1f08cb2572d080421d76ed1f"
	I1101 08:31:51.194520   19415 cri.go:89] found id: "b8dc66998b8c65737a3fc68f94611d5a75e4841817858e50cf8f41fe3d0b9111"
	I1101 08:31:51.194522   19415 cri.go:89] found id: "c4c4e8392feed85ce6d8b52f77463bc2a8238dd093e730bd11ad824f180a3227"
	I1101 08:31:51.194526   19415 cri.go:89] found id: "a4c41d6f050f2ca6af53a5d7a6a54f2b04fb24731eca6d7272b14503b747f50d"
	I1101 08:31:51.194531   19415 cri.go:89] found id: "2b4413f8423a31353523e4d44f7675fac21836f4e3b491f3d3f19955b8251025"
	I1101 08:31:51.194535   19415 cri.go:89] found id: "73d495a359ef08303218d0bd2af8743a68b70af8ffdfadd49ac606f145b559b6"
	I1101 08:31:51.194545   19415 cri.go:89] found id: "18fc9837ab4ea8c07f85c79610c9eda88508e53a37801274e8022d17c69f1a98"
	I1101 08:31:51.194553   19415 cri.go:89] found id: "87757a0f68b4c84bb6b6a0830633d098f646e6d9b6aa521bccfaeb77b540635f"
	I1101 08:31:51.194559   19415 cri.go:89] found id: "f17c2b6b25fbc93674f08548b5309b9715e2cb6d7600ac5d6557505072b3fb3f"
	I1101 08:31:51.194563   19415 cri.go:89] found id: "c60507f296e9571f568638d8fac5f87c704242925cf7b4aa378db809c3e3176b"
	I1101 08:31:51.194567   19415 cri.go:89] found id: "4c1ad1a76dfd8c2d5f52abbb115ac02bc7871e82ffe8eed8b2bce0574ab65ce3"
	I1101 08:31:51.194571   19415 cri.go:89] found id: "808e84f4795d8f4c21c0316fc7e7437f547a0d653508d0419288257452ccf97b"
	I1101 08:31:51.194575   19415 cri.go:89] found id: "d4c72eaef44361ee96a4bd979c97bf60778beaa33b136b1407ada4c192a11240"
	I1101 08:31:51.194579   19415 cri.go:89] found id: "cdda903ada7547082c69c9584210b581ad9bfe62602052de013ddc3c59d043bc"
	I1101 08:31:51.194584   19415 cri.go:89] found id: "b29235edc538375f7d6a03229d64488db46c49997437218731a8cd64edc28444"
	I1101 08:31:51.194587   19415 cri.go:89] found id: ""
	I1101 08:31:51.194637   19415 ssh_runner.go:195] Run: sudo runc list -f json
	I1101 08:31:51.211177   19415 out.go:203] 
	W1101 08:31:51.212676   19415 out.go:285] X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T08:31:51Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T08:31:51Z" level=error msg="open /run/runc: no such file or directory"
	
	W1101 08:31:51.212700   19415 out.go:285] * 
	* 
	W1101 08:31:51.217128   19415 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_af3b8a9ce4f102efc219f1404c9eed7a69cbf2d5_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_af3b8a9ce4f102efc219f1404c9eed7a69cbf2d5_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1101 08:31:51.218464   19415 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:810: failed to enable headlamp addon: args: "out/minikube-linux-amd64 addons enable headlamp -p addons-491859 --alsologtostderr -v=1": exit status 11
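This is the same pre-check failure as in the CSI test above: the enable exits at the paused-state check (MK_ADDON_ENABLE_PAUSED, same /run/runc error) before any Headlamp manifests are applied. A quick hedged confirmation that nothing was created (the "headlamp" namespace is the addon's usual target, assumed here):

    kubectl --context addons-491859 get pods -n headlamp
    # expected: "No resources found", since the enable never got past the runtime check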
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestAddons/parallel/Headlamp]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestAddons/parallel/Headlamp]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect addons-491859
helpers_test.go:243: (dbg) docker inspect addons-491859:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "227e011c4d635e86a7d98338cfbc60ccc8e82d06e889105c06607437284225aa",
	        "Created": "2025-11-01T08:29:42.519506733Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 11399,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-01T08:29:42.556344662Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:a1caeebaf98ed0136731e905a1e086f77985a42c2ebb5a7e0b3d0bd7fcbe10cc",
	        "ResolvConfPath": "/var/lib/docker/containers/227e011c4d635e86a7d98338cfbc60ccc8e82d06e889105c06607437284225aa/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/227e011c4d635e86a7d98338cfbc60ccc8e82d06e889105c06607437284225aa/hostname",
	        "HostsPath": "/var/lib/docker/containers/227e011c4d635e86a7d98338cfbc60ccc8e82d06e889105c06607437284225aa/hosts",
	        "LogPath": "/var/lib/docker/containers/227e011c4d635e86a7d98338cfbc60ccc8e82d06e889105c06607437284225aa/227e011c4d635e86a7d98338cfbc60ccc8e82d06e889105c06607437284225aa-json.log",
	        "Name": "/addons-491859",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-491859:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "addons-491859",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "227e011c4d635e86a7d98338cfbc60ccc8e82d06e889105c06607437284225aa",
	                "LowerDir": "/var/lib/docker/overlay2/10ecdaf89aff152dafb69a1872c98f770f95a3e681dcd3228c2161ebabf3576e-init/diff:/var/lib/docker/overlay2/95b4043403bc6c2b0722fad0bb31817195e1e004282cc78b772e7c8c9f9def2d/diff",
	                "MergedDir": "/var/lib/docker/overlay2/10ecdaf89aff152dafb69a1872c98f770f95a3e681dcd3228c2161ebabf3576e/merged",
	                "UpperDir": "/var/lib/docker/overlay2/10ecdaf89aff152dafb69a1872c98f770f95a3e681dcd3228c2161ebabf3576e/diff",
	                "WorkDir": "/var/lib/docker/overlay2/10ecdaf89aff152dafb69a1872c98f770f95a3e681dcd3228c2161ebabf3576e/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-491859",
	                "Source": "/var/lib/docker/volumes/addons-491859/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-491859",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-491859",
	                "name.minikube.sigs.k8s.io": "addons-491859",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "779f67121a96e6e16b406069fd940327ec18a62b70819f756c12dfbb3b10eed1",
	            "SandboxKey": "/var/run/docker/netns/779f67121a96",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32768"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32769"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32772"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32770"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32771"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-491859": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "aa:09:99:0c:7d:d6",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "6a483d68a9f1df4d48125180490b80e16279db89452ff7e0302439e525714351",
	                    "EndpointID": "46e97eb58d018377da131eed143ec60fb3017b97553f644bd343d2d30f74a16d",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-491859",
	                        "227e011c4d63"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
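The inspect dump above contains the host port mappings the earlier minikube commands rely on (e.g. the sshutil lines connecting to 127.0.0.1:32768). The same value can be read back directly with the format string those cli_runner.go lines use:

    docker container inspect addons-491859 \
      --format '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'
    # -> 32768 on this run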
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-491859 -n addons-491859
helpers_test.go:252: <<< TestAddons/parallel/Headlamp FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestAddons/parallel/Headlamp]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p addons-491859 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p addons-491859 logs -n 25: (1.332352548s)
helpers_test.go:260: TestAddons/parallel/Headlamp logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                   ARGS                                                                                                                                                                                                                                   │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-578604 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                │ download-only-578604   │ jenkins │ v1.37.0 │ 01 Nov 25 08:29 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                                                                                                                                                                                                                                                                                                    │ minikube               │ jenkins │ v1.37.0 │ 01 Nov 25 08:29 UTC │ 01 Nov 25 08:29 UTC │
	│ delete  │ -p download-only-578604                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-578604   │ jenkins │ v1.37.0 │ 01 Nov 25 08:29 UTC │ 01 Nov 25 08:29 UTC │
	│ start   │ -o=json --download-only -p download-only-292520 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                │ download-only-292520   │ jenkins │ v1.37.0 │ 01 Nov 25 08:29 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                                                                                                                                                                                                                                                                                                    │ minikube               │ jenkins │ v1.37.0 │ 01 Nov 25 08:29 UTC │ 01 Nov 25 08:29 UTC │
	│ delete  │ -p download-only-292520                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-292520   │ jenkins │ v1.37.0 │ 01 Nov 25 08:29 UTC │ 01 Nov 25 08:29 UTC │
	│ delete  │ -p download-only-578604                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-578604   │ jenkins │ v1.37.0 │ 01 Nov 25 08:29 UTC │ 01 Nov 25 08:29 UTC │
	│ delete  │ -p download-only-292520                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-292520   │ jenkins │ v1.37.0 │ 01 Nov 25 08:29 UTC │ 01 Nov 25 08:29 UTC │
	│ start   │ --download-only -p download-docker-005011 --alsologtostderr --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                                                                    │ download-docker-005011 │ jenkins │ v1.37.0 │ 01 Nov 25 08:29 UTC │                     │
	│ delete  │ -p download-docker-005011                                                                                                                                                                                                                                                                                                                                                                                                                                                │ download-docker-005011 │ jenkins │ v1.37.0 │ 01 Nov 25 08:29 UTC │ 01 Nov 25 08:29 UTC │
	│ start   │ --download-only -p binary-mirror-218549 --alsologtostderr --binary-mirror http://127.0.0.1:39227 --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                               │ binary-mirror-218549   │ jenkins │ v1.37.0 │ 01 Nov 25 08:29 UTC │                     │
	│ delete  │ -p binary-mirror-218549                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ binary-mirror-218549   │ jenkins │ v1.37.0 │ 01 Nov 25 08:29 UTC │ 01 Nov 25 08:29 UTC │
	│ addons  │ enable dashboard -p addons-491859                                                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-491859          │ jenkins │ v1.37.0 │ 01 Nov 25 08:29 UTC │                     │
	│ addons  │ disable dashboard -p addons-491859                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-491859          │ jenkins │ v1.37.0 │ 01 Nov 25 08:29 UTC │                     │
	│ start   │ -p addons-491859 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-491859          │ jenkins │ v1.37.0 │ 01 Nov 25 08:29 UTC │ 01 Nov 25 08:31 UTC │
	│ addons  │ addons-491859 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                              │ addons-491859          │ jenkins │ v1.37.0 │ 01 Nov 25 08:31 UTC │                     │
	│ addons  │ addons-491859 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-491859          │ jenkins │ v1.37.0 │ 01 Nov 25 08:31 UTC │                     │
	│ addons  │ enable headlamp -p addons-491859 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-491859          │ jenkins │ v1.37.0 │ 01 Nov 25 08:31 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/01 08:29:18
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1101 08:29:18.395686   10731 out.go:360] Setting OutFile to fd 1 ...
	I1101 08:29:18.395821   10731 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 08:29:18.395832   10731 out.go:374] Setting ErrFile to fd 2...
	I1101 08:29:18.395836   10731 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 08:29:18.396084   10731 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21835-5913/.minikube/bin
	I1101 08:29:18.396625   10731 out.go:368] Setting JSON to false
	I1101 08:29:18.397562   10731 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":706,"bootTime":1761985052,"procs":174,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1101 08:29:18.397656   10731 start.go:143] virtualization: kvm guest
	I1101 08:29:18.399672   10731 out.go:179] * [addons-491859] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1101 08:29:18.401439   10731 out.go:179]   - MINIKUBE_LOCATION=21835
	I1101 08:29:18.401488   10731 notify.go:221] Checking for updates...
	I1101 08:29:18.404241   10731 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1101 08:29:18.405465   10731 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21835-5913/kubeconfig
	I1101 08:29:18.406814   10731 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21835-5913/.minikube
	I1101 08:29:18.408143   10731 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1101 08:29:18.409539   10731 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1101 08:29:18.411402   10731 driver.go:422] Setting default libvirt URI to qemu:///system
	I1101 08:29:18.436678   10731 docker.go:124] docker version: linux-28.5.1:Docker Engine - Community
	I1101 08:29:18.436815   10731 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 08:29:18.491880   10731 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:26 OomKillDisable:false NGoroutines:50 SystemTime:2025-11-01 08:29:18.482006307 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1101 08:29:18.492017   10731 docker.go:319] overlay module found
	I1101 08:29:18.494000   10731 out.go:179] * Using the docker driver based on user configuration
	I1101 08:29:18.495094   10731 start.go:309] selected driver: docker
	I1101 08:29:18.495110   10731 start.go:930] validating driver "docker" against <nil>
	I1101 08:29:18.495122   10731 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1101 08:29:18.495681   10731 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 08:29:18.553706   10731 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:26 OomKillDisable:false NGoroutines:50 SystemTime:2025-11-01 08:29:18.544541725 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
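For reference, a minimal Go sketch of the same host probe the log shows twice above ("docker system info --format {{json .}}"). It assumes only that the docker CLI is on PATH; the field names are the standard keys in docker's JSON info output (they match the struct dump above) and the program is illustrative, not minikube's own code.

// docker_info.go: decode a few of the fields minikube checks before picking the driver.
package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
)

type dockerInfo struct {
	NCPU          int    `json:"NCPU"`
	MemTotal      int64  `json:"MemTotal"`
	CgroupDriver  string `json:"CgroupDriver"`
	ServerVersion string `json:"ServerVersion"`
}

func main() {
	// Same invocation as the cli_runner call logged above.
	out, err := exec.Command("docker", "system", "info", "--format", "{{json .}}").Output()
	if err != nil {
		log.Fatalf("docker system info: %v", err)
	}
	var info dockerInfo
	if err := json.Unmarshal(out, &info); err != nil {
		log.Fatalf("decode: %v", err)
	}
	fmt.Printf("CPUs=%d MemTotal=%d CgroupDriver=%s ServerVersion=%s\n",
		info.NCPU, info.MemTotal, info.CgroupDriver, info.ServerVersion)
}

Run with "go run docker_info.go"; on this agent it would report 8 CPUs, the systemd cgroup driver and server version 28.5.1, matching the dump above.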
	I1101 08:29:18.553848   10731 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1101 08:29:18.554100   10731 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1101 08:29:18.555995   10731 out.go:179] * Using Docker driver with root privileges
	I1101 08:29:18.557517   10731 cni.go:84] Creating CNI manager for ""
	I1101 08:29:18.557586   10731 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1101 08:29:18.557596   10731 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1101 08:29:18.557666   10731 start.go:353] cluster config:
	{Name:addons-491859 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-491859 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime
:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:
AutoPauseInterval:1m0s}
	I1101 08:29:18.559147   10731 out.go:179] * Starting "addons-491859" primary control-plane node in "addons-491859" cluster
	I1101 08:29:18.560143   10731 cache.go:124] Beginning downloading kic base image for docker with crio
	I1101 08:29:18.561413   10731 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1101 08:29:18.562561   10731 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1101 08:29:18.562598   10731 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1101 08:29:18.562609   10731 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21835-5913/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1101 08:29:18.562621   10731 cache.go:59] Caching tarball of preloaded images
	I1101 08:29:18.562706   10731 preload.go:233] Found /home/jenkins/minikube-integration/21835-5913/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1101 08:29:18.562719   10731 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1101 08:29:18.563075   10731 profile.go:143] Saving config to /home/jenkins/minikube-integration/21835-5913/.minikube/profiles/addons-491859/config.json ...
	I1101 08:29:18.563105   10731 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21835-5913/.minikube/profiles/addons-491859/config.json: {Name:mke52046cdc175d21920b9af0bb0df87c10485c6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
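The profile config saved above lands in profiles/addons-491859/config.json. A minimal sketch for reading it back, assuming the JSON field names match the struct dump printed earlier in the log (Name, Driver, KubernetesConfig.KubernetesVersion); note this CI run uses a non-default MINIKUBE_HOME (/home/jenkins/minikube-integration/21835-5913/.minikube), so the $HOME/.minikube path below is an assumption for a default install.

// read_profile.go: illustrative reader for the saved profile config.
package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os"
)

type profileConfig struct {
	Name             string `json:"Name"`
	Driver           string `json:"Driver"`
	KubernetesConfig struct {
		KubernetesVersion string `json:"KubernetesVersion"`
		ContainerRuntime  string `json:"ContainerRuntime"`
	} `json:"KubernetesConfig"`
}

func main() {
	path := os.ExpandEnv("$HOME/.minikube/profiles/addons-491859/config.json")
	data, err := os.ReadFile(path)
	if err != nil {
		log.Fatal(err)
	}
	var cfg profileConfig
	if err := json.Unmarshal(data, &cfg); err != nil {
		log.Fatal(err)
	}
	fmt.Printf("%s: driver=%s runtime=%s k8s=%s\n",
		cfg.Name, cfg.Driver, cfg.KubernetesConfig.ContainerRuntime, cfg.KubernetesConfig.KubernetesVersion)
}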
	I1101 08:29:18.580276   10731 cache.go:153] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 to local cache
	I1101 08:29:18.580409   10731 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local cache directory
	I1101 08:29:18.580430   10731 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local cache directory, skipping pull
	I1101 08:29:18.580437   10731 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in cache, skipping pull
	I1101 08:29:18.580449   10731 cache.go:156] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 as a tarball
	I1101 08:29:18.580456   10731 cache.go:166] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 from local cache
	I1101 08:29:30.886343   10731 cache.go:168] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 from cached tarball
	I1101 08:29:30.886382   10731 cache.go:233] Successfully downloaded all kic artifacts
	I1101 08:29:30.886414   10731 start.go:360] acquireMachinesLock for addons-491859: {Name:mk68f33aa39dc4a1fa1cf6d283fdb1adb54191e0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1101 08:29:30.886530   10731 start.go:364] duration metric: took 89.954µs to acquireMachinesLock for "addons-491859"
	I1101 08:29:30.886555   10731 start.go:93] Provisioning new machine with config: &{Name:addons-491859 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-491859 Namespace:default APIServerHAVIP: APIServerName:min
ikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath:
SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1101 08:29:30.886624   10731 start.go:125] createHost starting for "" (driver="docker")
	I1101 08:29:30.888467   10731 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I1101 08:29:30.888693   10731 start.go:159] libmachine.API.Create for "addons-491859" (driver="docker")
	I1101 08:29:30.888723   10731 client.go:173] LocalClient.Create starting
	I1101 08:29:30.888847   10731 main.go:143] libmachine: Creating CA: /home/jenkins/minikube-integration/21835-5913/.minikube/certs/ca.pem
	I1101 08:29:31.180353   10731 main.go:143] libmachine: Creating client certificate: /home/jenkins/minikube-integration/21835-5913/.minikube/certs/cert.pem
	I1101 08:29:31.235331   10731 cli_runner.go:164] Run: docker network inspect addons-491859 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1101 08:29:31.252265   10731 cli_runner.go:211] docker network inspect addons-491859 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1101 08:29:31.252359   10731 network_create.go:284] running [docker network inspect addons-491859] to gather additional debugging logs...
	I1101 08:29:31.252378   10731 cli_runner.go:164] Run: docker network inspect addons-491859
	W1101 08:29:31.269367   10731 cli_runner.go:211] docker network inspect addons-491859 returned with exit code 1
	I1101 08:29:31.269400   10731 network_create.go:287] error running [docker network inspect addons-491859]: docker network inspect addons-491859: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-491859 not found
	I1101 08:29:31.269411   10731 network_create.go:289] output of [docker network inspect addons-491859]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-491859 not found
	
	** /stderr **
	I1101 08:29:31.269518   10731 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1101 08:29:31.286815   10731 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0016f7d10}
	I1101 08:29:31.286880   10731 network_create.go:124] attempt to create docker network addons-491859 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1101 08:29:31.286933   10731 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-491859 addons-491859
	I1101 08:29:31.346236   10731 network_create.go:108] docker network addons-491859 192.168.49.0/24 created
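Once the cluster network exists, its subnet and gateway can be confirmed with the same docker inspect template idea used in the log. A minimal sketch, assuming the docker CLI is on PATH and the network name addons-491859 created just above:

// network_check.go: print the IPAM config of the freshly created cluster network.
package main

import (
	"fmt"
	"log"
	"os/exec"
)

func main() {
	out, err := exec.Command("docker", "network", "inspect", "addons-491859",
		"--format", "{{json .IPAM.Config}}").Output()
	if err != nil {
		log.Fatalf("docker network inspect: %v", err)
	}
	// Expected output for this run: [{"Subnet":"192.168.49.0/24","Gateway":"192.168.49.1"}]
	fmt.Println(string(out))
}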
	I1101 08:29:31.346287   10731 kic.go:121] calculated static IP "192.168.49.2" for the "addons-491859" container
	I1101 08:29:31.346356   10731 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1101 08:29:31.364286   10731 cli_runner.go:164] Run: docker volume create addons-491859 --label name.minikube.sigs.k8s.io=addons-491859 --label created_by.minikube.sigs.k8s.io=true
	I1101 08:29:31.382821   10731 oci.go:103] Successfully created a docker volume addons-491859
	I1101 08:29:31.382916   10731 cli_runner.go:164] Run: docker run --rm --name addons-491859-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-491859 --entrypoint /usr/bin/test -v addons-491859:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -d /var/lib
	I1101 08:29:38.032982   10731 cli_runner.go:217] Completed: docker run --rm --name addons-491859-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-491859 --entrypoint /usr/bin/test -v addons-491859:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -d /var/lib: (6.650026929s)
	I1101 08:29:38.033010   10731 oci.go:107] Successfully prepared a docker volume addons-491859
	I1101 08:29:38.033029   10731 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1101 08:29:38.033048   10731 kic.go:194] Starting extracting preloaded images to volume ...
	I1101 08:29:38.033126   10731 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21835-5913/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-491859:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir
	I1101 08:29:42.448129   10731 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21835-5913/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-491859:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir: (4.414960684s)
	I1101 08:29:42.448157   10731 kic.go:203] duration metric: took 4.415105637s to extract preloaded images to volume ...
	W1101 08:29:42.448275   10731 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1101 08:29:42.448309   10731 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1101 08:29:42.448352   10731 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1101 08:29:42.503446   10731 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-491859 --name addons-491859 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-491859 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-491859 --network addons-491859 --ip 192.168.49.2 --volume addons-491859:/var --security-opt apparmor=unconfined --memory=4096mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8
	I1101 08:29:42.814516   10731 cli_runner.go:164] Run: docker container inspect addons-491859 --format={{.State.Running}}
	I1101 08:29:42.835696   10731 cli_runner.go:164] Run: docker container inspect addons-491859 --format={{.State.Status}}
	I1101 08:29:42.854515   10731 cli_runner.go:164] Run: docker exec addons-491859 stat /var/lib/dpkg/alternatives/iptables
	I1101 08:29:42.909716   10731 oci.go:144] the created container "addons-491859" has a running status.
	I1101 08:29:42.909786   10731 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21835-5913/.minikube/machines/addons-491859/id_rsa...
	I1101 08:29:43.135081   10731 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21835-5913/.minikube/machines/addons-491859/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1101 08:29:43.170376   10731 cli_runner.go:164] Run: docker container inspect addons-491859 --format={{.State.Status}}
	I1101 08:29:43.191335   10731 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1101 08:29:43.191357   10731 kic_runner.go:114] Args: [docker exec --privileged addons-491859 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1101 08:29:43.240422   10731 cli_runner.go:164] Run: docker container inspect addons-491859 --format={{.State.Status}}
	I1101 08:29:43.261938   10731 machine.go:94] provisionDockerMachine start ...
	I1101 08:29:43.262057   10731 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-491859
	I1101 08:29:43.281484   10731 main.go:143] libmachine: Using SSH client type: native
	I1101 08:29:43.281775   10731 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1101 08:29:43.281794   10731 main.go:143] libmachine: About to run SSH command:
	hostname
	I1101 08:29:43.425747   10731 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-491859
	
	I1101 08:29:43.425777   10731 ubuntu.go:182] provisioning hostname "addons-491859"
	I1101 08:29:43.425836   10731 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-491859
	I1101 08:29:43.444569   10731 main.go:143] libmachine: Using SSH client type: native
	I1101 08:29:43.444850   10731 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1101 08:29:43.444886   10731 main.go:143] libmachine: About to run SSH command:
	sudo hostname addons-491859 && echo "addons-491859" | sudo tee /etc/hostname
	I1101 08:29:43.597299   10731 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-491859
	
	I1101 08:29:43.597387   10731 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-491859
	I1101 08:29:43.615147   10731 main.go:143] libmachine: Using SSH client type: native
	I1101 08:29:43.615387   10731 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1101 08:29:43.615407   10731 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-491859' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-491859/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-491859' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1101 08:29:43.756412   10731 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1101 08:29:43.756438   10731 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21835-5913/.minikube CaCertPath:/home/jenkins/minikube-integration/21835-5913/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21835-5913/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21835-5913/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21835-5913/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21835-5913/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21835-5913/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21835-5913/.minikube}
	I1101 08:29:43.756480   10731 ubuntu.go:190] setting up certificates
	I1101 08:29:43.756494   10731 provision.go:84] configureAuth start
	I1101 08:29:43.756548   10731 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-491859
	I1101 08:29:43.774542   10731 provision.go:143] copyHostCerts
	I1101 08:29:43.774626   10731 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21835-5913/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21835-5913/.minikube/ca.pem (1078 bytes)
	I1101 08:29:43.774741   10731 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21835-5913/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21835-5913/.minikube/cert.pem (1123 bytes)
	I1101 08:29:43.774803   10731 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21835-5913/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21835-5913/.minikube/key.pem (1675 bytes)
	I1101 08:29:43.774855   10731 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21835-5913/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21835-5913/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21835-5913/.minikube/certs/ca-key.pem org=jenkins.addons-491859 san=[127.0.0.1 192.168.49.2 addons-491859 localhost minikube]
	I1101 08:29:44.000398   10731 provision.go:177] copyRemoteCerts
	I1101 08:29:44.000455   10731 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1101 08:29:44.000491   10731 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-491859
	I1101 08:29:44.018425   10731 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21835-5913/.minikube/machines/addons-491859/id_rsa Username:docker}
	I1101 08:29:44.119221   10731 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-5913/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1101 08:29:44.138425   10731 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-5913/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1101 08:29:44.156400   10731 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-5913/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1101 08:29:44.174359   10731 provision.go:87] duration metric: took 417.851925ms to configureAuth
	I1101 08:29:44.174388   10731 ubuntu.go:206] setting minikube options for container-runtime
	I1101 08:29:44.174582   10731 config.go:182] Loaded profile config "addons-491859": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 08:29:44.174696   10731 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-491859
	I1101 08:29:44.192685   10731 main.go:143] libmachine: Using SSH client type: native
	I1101 08:29:44.192995   10731 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1101 08:29:44.193022   10731 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1101 08:29:44.447606   10731 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1101 08:29:44.447631   10731 machine.go:97] duration metric: took 1.185659743s to provisionDockerMachine
	I1101 08:29:44.447644   10731 client.go:176] duration metric: took 13.558912809s to LocalClient.Create
	I1101 08:29:44.447671   10731 start.go:167] duration metric: took 13.558978318s to libmachine.API.Create "addons-491859"
	I1101 08:29:44.447680   10731 start.go:293] postStartSetup for "addons-491859" (driver="docker")
	I1101 08:29:44.447693   10731 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1101 08:29:44.447752   10731 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1101 08:29:44.447791   10731 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-491859
	I1101 08:29:44.465841   10731 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21835-5913/.minikube/machines/addons-491859/id_rsa Username:docker}
	I1101 08:29:44.568433   10731 ssh_runner.go:195] Run: cat /etc/os-release
	I1101 08:29:44.572471   10731 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1101 08:29:44.572498   10731 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1101 08:29:44.572521   10731 filesync.go:126] Scanning /home/jenkins/minikube-integration/21835-5913/.minikube/addons for local assets ...
	I1101 08:29:44.572588   10731 filesync.go:126] Scanning /home/jenkins/minikube-integration/21835-5913/.minikube/files for local assets ...
	I1101 08:29:44.572614   10731 start.go:296] duration metric: took 124.92709ms for postStartSetup
	I1101 08:29:44.572955   10731 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-491859
	I1101 08:29:44.591464   10731 profile.go:143] Saving config to /home/jenkins/minikube-integration/21835-5913/.minikube/profiles/addons-491859/config.json ...
	I1101 08:29:44.591728   10731 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1101 08:29:44.591766   10731 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-491859
	I1101 08:29:44.611217   10731 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21835-5913/.minikube/machines/addons-491859/id_rsa Username:docker}
	I1101 08:29:44.709102   10731 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1101 08:29:44.713766   10731 start.go:128] duration metric: took 13.827130547s to createHost
	I1101 08:29:44.713794   10731 start.go:83] releasing machines lock for "addons-491859", held for 13.827250706s
	I1101 08:29:44.713882   10731 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-491859
	I1101 08:29:44.733704   10731 ssh_runner.go:195] Run: cat /version.json
	I1101 08:29:44.733759   10731 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-491859
	I1101 08:29:44.733786   10731 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1101 08:29:44.733841   10731 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-491859
	I1101 08:29:44.753676   10731 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21835-5913/.minikube/machines/addons-491859/id_rsa Username:docker}
	I1101 08:29:44.754517   10731 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21835-5913/.minikube/machines/addons-491859/id_rsa Username:docker}
	I1101 08:29:44.849952   10731 ssh_runner.go:195] Run: systemctl --version
	I1101 08:29:44.904440   10731 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1101 08:29:44.941511   10731 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1101 08:29:44.946240   10731 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1101 08:29:44.946308   10731 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1101 08:29:44.972407   10731 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1101 08:29:44.972434   10731 start.go:496] detecting cgroup driver to use...
	I1101 08:29:44.972462   10731 detect.go:190] detected "systemd" cgroup driver on host os
	I1101 08:29:44.972500   10731 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1101 08:29:44.988367   10731 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1101 08:29:45.001094   10731 docker.go:218] disabling cri-docker service (if available) ...
	I1101 08:29:45.001157   10731 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1101 08:29:45.017747   10731 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1101 08:29:45.035550   10731 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1101 08:29:45.113377   10731 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1101 08:29:45.198918   10731 docker.go:234] disabling docker service ...
	I1101 08:29:45.198974   10731 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1101 08:29:45.217439   10731 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1101 08:29:45.230101   10731 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1101 08:29:45.312169   10731 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1101 08:29:45.393451   10731 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1101 08:29:45.406098   10731 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1101 08:29:45.420630   10731 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1101 08:29:45.420694   10731 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 08:29:45.431355   10731 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1101 08:29:45.431426   10731 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 08:29:45.440760   10731 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 08:29:45.449985   10731 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 08:29:45.459096   10731 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1101 08:29:45.467588   10731 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 08:29:45.476897   10731 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 08:29:45.490608   10731 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 08:29:45.499210   10731 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1101 08:29:45.506911   10731 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1101 08:29:45.506971   10731 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1101 08:29:45.519650   10731 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1101 08:29:45.527667   10731 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 08:29:45.606421   10731 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1101 08:29:45.708452   10731 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1101 08:29:45.708537   10731 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1101 08:29:45.712592   10731 start.go:564] Will wait 60s for crictl version
	I1101 08:29:45.712643   10731 ssh_runner.go:195] Run: which crictl
	I1101 08:29:45.716286   10731 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1101 08:29:45.741302   10731 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
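The crictl probe above runs inside the node over SSH. A minimal sketch that repeats it from the host, assuming minikube is on PATH and the profile name addons-491859 from this run; it is a convenience wrapper, not part of the test harness.

// crictl_check.go: re-run the runtime/version probe shown above via "minikube ssh".
package main

import (
	"fmt"
	"log"
	"os/exec"
)

func main() {
	out, err := exec.Command("minikube", "-p", "addons-491859", "ssh", "--",
		"sudo", "crictl", "version").CombinedOutput()
	if err != nil {
		log.Fatalf("crictl version: %v\n%s", err, out)
	}
	fmt.Print(string(out)) // RuntimeName: cri-o, RuntimeVersion: 1.34.1, RuntimeApiVersion: v1
}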
	I1101 08:29:45.741413   10731 ssh_runner.go:195] Run: crio --version
	I1101 08:29:45.768032   10731 ssh_runner.go:195] Run: crio --version
	I1101 08:29:45.798083   10731 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1101 08:29:45.799228   10731 cli_runner.go:164] Run: docker network inspect addons-491859 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1101 08:29:45.816845   10731 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1101 08:29:45.821062   10731 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1101 08:29:45.831675   10731 kubeadm.go:884] updating cluster {Name:addons-491859 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-491859 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNa
mes:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketV
MnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1101 08:29:45.831843   10731 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1101 08:29:45.831917   10731 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 08:29:45.865293   10731 crio.go:514] all images are preloaded for cri-o runtime.
	I1101 08:29:45.865315   10731 crio.go:433] Images already preloaded, skipping extraction
	I1101 08:29:45.865364   10731 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 08:29:45.891285   10731 crio.go:514] all images are preloaded for cri-o runtime.
	I1101 08:29:45.891306   10731 cache_images.go:86] Images are preloaded, skipping loading
	I1101 08:29:45.891315   10731 kubeadm.go:935] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1101 08:29:45.891413   10731 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-491859 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:addons-491859 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1101 08:29:45.891486   10731 ssh_runner.go:195] Run: crio config
	I1101 08:29:45.936476   10731 cni.go:84] Creating CNI manager for ""
	I1101 08:29:45.936502   10731 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1101 08:29:45.936523   10731 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1101 08:29:45.936544   10731 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-491859 NodeName:addons-491859 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernet
es/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1101 08:29:45.936665   10731 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-491859"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1101 08:29:45.936725   10731 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1101 08:29:45.945444   10731 binaries.go:44] Found k8s binaries, skipping transfer
	I1101 08:29:45.945521   10731 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1101 08:29:45.953729   10731 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1101 08:29:45.967053   10731 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1101 08:29:45.983566   10731 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2209 bytes)
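The rendered kubeadm config copied to the node just above (kubeadm.yaml.new, later installed as /var/tmp/minikube/kubeadm.yaml further down in this log) can be sanity-checked against the cgroup driver and CRI socket detected earlier. A minimal sketch, assuming minikube on PATH, the addons-491859 profile, and that final path:

// kubeadm_config_check.go: verify the rendered config matches the detected runtime settings.
package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("minikube", "-p", "addons-491859", "ssh", "--",
		"sudo", "cat", "/var/tmp/minikube/kubeadm.yaml").Output()
	if err != nil {
		log.Fatalf("read kubeadm.yaml: %v", err)
	}
	cfg := string(out)
	// The rendered cgroup driver must match the "systemd" driver detected on the host
	// earlier in the log, otherwise the kubelet will fail to start.
	fmt.Println("cgroupDriver is systemd:", strings.Contains(cfg, "cgroupDriver: systemd"))
	fmt.Println("CRI socket is crio:     ", strings.Contains(cfg, "unix:///var/run/crio/crio.sock"))
}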
	I1101 08:29:45.997069   10731 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1101 08:29:46.000903   10731 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1101 08:29:46.011598   10731 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 08:29:46.091901   10731 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1101 08:29:46.116472   10731 certs.go:69] Setting up /home/jenkins/minikube-integration/21835-5913/.minikube/profiles/addons-491859 for IP: 192.168.49.2
	I1101 08:29:46.116499   10731 certs.go:195] generating shared ca certs ...
	I1101 08:29:46.116515   10731 certs.go:227] acquiring lock for ca certs: {Name:mkfdee6a84670347521013ebeef165551380cb9c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 08:29:46.116646   10731 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/21835-5913/.minikube/ca.key
	I1101 08:29:46.259033   10731 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21835-5913/.minikube/ca.crt ...
	I1101 08:29:46.259063   10731 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21835-5913/.minikube/ca.crt: {Name:mk4bf3995d5d0f4fef38f99e080776cf96bc48cd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 08:29:46.259225   10731 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21835-5913/.minikube/ca.key ...
	I1101 08:29:46.259236   10731 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21835-5913/.minikube/ca.key: {Name:mkd1f675dd286f2d5b71c8b39a4614cd145027a0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 08:29:46.259325   10731 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21835-5913/.minikube/proxy-client-ca.key
	I1101 08:29:46.470101   10731 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21835-5913/.minikube/proxy-client-ca.crt ...
	I1101 08:29:46.470136   10731 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21835-5913/.minikube/proxy-client-ca.crt: {Name:mk4d5edb6e3284aedb960a5d17b6874006117575 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 08:29:46.470312   10731 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21835-5913/.minikube/proxy-client-ca.key ...
	I1101 08:29:46.470322   10731 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21835-5913/.minikube/proxy-client-ca.key: {Name:mkdbbe1554f606cb64b651fbfe7fb2d808191132 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 08:29:46.470397   10731 certs.go:257] generating profile certs ...
	I1101 08:29:46.470482   10731 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21835-5913/.minikube/profiles/addons-491859/client.key
	I1101 08:29:46.470506   10731 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21835-5913/.minikube/profiles/addons-491859/client.crt with IP's: []
	I1101 08:29:46.631245   10731 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21835-5913/.minikube/profiles/addons-491859/client.crt ...
	I1101 08:29:46.631280   10731 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21835-5913/.minikube/profiles/addons-491859/client.crt: {Name:mkad9c6537b618eb28e78c59039c41f01bf0b157 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 08:29:46.631456   10731 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21835-5913/.minikube/profiles/addons-491859/client.key ...
	I1101 08:29:46.631467   10731 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21835-5913/.minikube/profiles/addons-491859/client.key: {Name:mk71bc388ac6118a79f3338cab825b3d9b05a13f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 08:29:46.631543   10731 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21835-5913/.minikube/profiles/addons-491859/apiserver.key.54d41853
	I1101 08:29:46.631561   10731 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21835-5913/.minikube/profiles/addons-491859/apiserver.crt.54d41853 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1101 08:29:47.056564   10731 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21835-5913/.minikube/profiles/addons-491859/apiserver.crt.54d41853 ...
	I1101 08:29:47.056601   10731 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21835-5913/.minikube/profiles/addons-491859/apiserver.crt.54d41853: {Name:mka7b039c670b14a7a31317583752fd87a0fd045 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 08:29:47.056772   10731 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21835-5913/.minikube/profiles/addons-491859/apiserver.key.54d41853 ...
	I1101 08:29:47.056785   10731 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21835-5913/.minikube/profiles/addons-491859/apiserver.key.54d41853: {Name:mke7fad54183e66065994d5454419195014552ae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 08:29:47.056878   10731 certs.go:382] copying /home/jenkins/minikube-integration/21835-5913/.minikube/profiles/addons-491859/apiserver.crt.54d41853 -> /home/jenkins/minikube-integration/21835-5913/.minikube/profiles/addons-491859/apiserver.crt
	I1101 08:29:47.056960   10731 certs.go:386] copying /home/jenkins/minikube-integration/21835-5913/.minikube/profiles/addons-491859/apiserver.key.54d41853 -> /home/jenkins/minikube-integration/21835-5913/.minikube/profiles/addons-491859/apiserver.key
	I1101 08:29:47.057010   10731 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21835-5913/.minikube/profiles/addons-491859/proxy-client.key
	I1101 08:29:47.057029   10731 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21835-5913/.minikube/profiles/addons-491859/proxy-client.crt with IP's: []
	I1101 08:29:47.316919   10731 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21835-5913/.minikube/profiles/addons-491859/proxy-client.crt ...
	I1101 08:29:47.316951   10731 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21835-5913/.minikube/profiles/addons-491859/proxy-client.crt: {Name:mk04669aab90e96b4612effdbd0c5217954f9ad6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 08:29:47.317125   10731 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21835-5913/.minikube/profiles/addons-491859/proxy-client.key ...
	I1101 08:29:47.317137   10731 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21835-5913/.minikube/profiles/addons-491859/proxy-client.key: {Name:mk921078f929be7c707b6c61cfb161c2d07cd92c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
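The certs.go lines above describe generating a "minikubeCA" certificate authority plus signed profile certs. A minimal, self-contained Go sketch of that kind of CA generation follows; the key size, subject and validity period are illustrative assumptions, and this is a conceptual stand-in, not minikube's actual crypto code.

// make_ca.go: generate a self-signed CA key pair and write ca.crt / ca.key as PEM.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"log"
	"math/big"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		log.Fatal(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(3 * 365 * 24 * time.Hour), // illustrative validity
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
		BasicConstraintsValid: true,
	}
	// Self-signed: the template is both subject and issuer.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		log.Fatal(err)
	}
	certOut, err := os.Create("ca.crt")
	if err != nil {
		log.Fatal(err)
	}
	defer certOut.Close()
	pem.Encode(certOut, &pem.Block{Type: "CERTIFICATE", Bytes: der})

	keyOut, err := os.Create("ca.key")
	if err != nil {
		log.Fatal(err)
	}
	defer keyOut.Close()
	pem.Encode(keyOut, &pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(key)})
}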
	I1101 08:29:47.317339   10731 certs.go:484] found cert: /home/jenkins/minikube-integration/21835-5913/.minikube/certs/ca-key.pem (1675 bytes)
	I1101 08:29:47.317377   10731 certs.go:484] found cert: /home/jenkins/minikube-integration/21835-5913/.minikube/certs/ca.pem (1078 bytes)
	I1101 08:29:47.317397   10731 certs.go:484] found cert: /home/jenkins/minikube-integration/21835-5913/.minikube/certs/cert.pem (1123 bytes)
	I1101 08:29:47.317415   10731 certs.go:484] found cert: /home/jenkins/minikube-integration/21835-5913/.minikube/certs/key.pem (1675 bytes)
	I1101 08:29:47.317987   10731 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-5913/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1101 08:29:47.336457   10731 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-5913/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1101 08:29:47.354326   10731 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-5913/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1101 08:29:47.372213   10731 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-5913/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1101 08:29:47.390015   10731 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-5913/.minikube/profiles/addons-491859/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1101 08:29:47.408600   10731 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-5913/.minikube/profiles/addons-491859/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1101 08:29:47.427246   10731 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-5913/.minikube/profiles/addons-491859/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1101 08:29:47.445950   10731 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-5913/.minikube/profiles/addons-491859/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1101 08:29:47.464227   10731 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-5913/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1101 08:29:47.484996   10731 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1101 08:29:47.498119   10731 ssh_runner.go:195] Run: openssl version
	I1101 08:29:47.504350   10731 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1101 08:29:47.515795   10731 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1101 08:29:47.519757   10731 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  1 08:29 /usr/share/ca-certificates/minikubeCA.pem
	I1101 08:29:47.519849   10731 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1101 08:29:47.553830   10731 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1101 08:29:47.563614   10731 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1101 08:29:47.567597   10731 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1101 08:29:47.567662   10731 kubeadm.go:401] StartCluster: {Name:addons-491859 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-491859 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames
:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMne
tClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 08:29:47.567745   10731 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1101 08:29:47.567795   10731 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1101 08:29:47.596594   10731 cri.go:89] found id: ""
	I1101 08:29:47.596673   10731 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1101 08:29:47.605319   10731 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1101 08:29:47.613740   10731 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1101 08:29:47.613791   10731 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1101 08:29:47.622171   10731 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1101 08:29:47.622204   10731 kubeadm.go:158] found existing configuration files:
	
	I1101 08:29:47.622253   10731 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1101 08:29:47.630497   10731 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1101 08:29:47.630562   10731 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1101 08:29:47.638605   10731 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1101 08:29:47.646770   10731 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1101 08:29:47.646828   10731 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1101 08:29:47.654629   10731 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1101 08:29:47.662566   10731 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1101 08:29:47.662631   10731 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1101 08:29:47.670809   10731 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1101 08:29:47.679951   10731 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1101 08:29:47.680031   10731 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
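The grep/rm pairs above are the stale-config check: each kubeconfig under /etc/kubernetes is kept only if it already references https://control-plane.minikube.internal:8443; on a first start none of the files exist, so all four removals are no-ops and kubeadm regenerates the files below. A hedged, self-contained sketch of that loop, using a hypothetical cleanStaleConfigs helper rather than the actual kubeadm.go code:

	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	const endpoint = "https://control-plane.minikube.internal:8443"

	// cleanStaleConfigs removes any kubeconfig that does not already point at
	// the expected control-plane endpoint; missing files just get removed as a
	// no-op. Illustrative sketch only, not the real minikube logic.
	func cleanStaleConfigs() {
		for _, f := range []string{
			"/etc/kubernetes/admin.conf",
			"/etc/kubernetes/kubelet.conf",
			"/etc/kubernetes/controller-manager.conf",
			"/etc/kubernetes/scheduler.conf",
		} {
			data, err := os.ReadFile(f)
			if err == nil && strings.Contains(string(data), endpoint) {
				continue // endpoint present, keep the file
			}
			fmt.Printf("%q may not be in %s - will remove\n", endpoint, f)
			_ = os.Remove(f) // mirrors `sudo rm -f`
		}
	}

	func main() { cleanStaleConfigs() }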
	I1101 08:29:47.688660   10731 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1101 08:29:47.726643   10731 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1101 08:29:47.726697   10731 kubeadm.go:319] [preflight] Running pre-flight checks
	I1101 08:29:47.748533   10731 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1101 08:29:47.748608   10731 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1043-gcp
	I1101 08:29:47.748670   10731 kubeadm.go:319] OS: Linux
	I1101 08:29:47.748756   10731 kubeadm.go:319] CGROUPS_CPU: enabled
	I1101 08:29:47.748815   10731 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1101 08:29:47.748859   10731 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1101 08:29:47.748936   10731 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1101 08:29:47.748982   10731 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1101 08:29:47.749023   10731 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1101 08:29:47.749097   10731 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1101 08:29:47.749163   10731 kubeadm.go:319] CGROUPS_IO: enabled
	I1101 08:29:47.805000   10731 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1101 08:29:47.805091   10731 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1101 08:29:47.805196   10731 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1101 08:29:47.811990   10731 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1101 08:29:47.813963   10731 out.go:252]   - Generating certificates and keys ...
	I1101 08:29:47.814047   10731 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1101 08:29:47.814148   10731 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1101 08:29:47.942661   10731 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1101 08:29:48.258914   10731 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1101 08:29:48.892215   10731 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1101 08:29:49.143224   10731 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1101 08:29:49.531503   10731 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1101 08:29:49.531665   10731 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [addons-491859 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1101 08:29:49.879321   10731 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1101 08:29:49.879473   10731 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [addons-491859 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1101 08:29:50.262434   10731 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1101 08:29:50.378287   10731 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1101 08:29:50.604682   10731 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1101 08:29:50.604768   10731 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1101 08:29:51.288241   10731 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1101 08:29:51.427432   10731 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1101 08:29:51.661821   10731 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1101 08:29:51.718850   10731 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1101 08:29:51.976623   10731 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1101 08:29:51.977051   10731 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1101 08:29:51.980988   10731 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1101 08:29:51.982657   10731 out.go:252]   - Booting up control plane ...
	I1101 08:29:51.982752   10731 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1101 08:29:51.982825   10731 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1101 08:29:51.983140   10731 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1101 08:29:51.996841   10731 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1101 08:29:51.996985   10731 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1101 08:29:52.004102   10731 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1101 08:29:52.005057   10731 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1101 08:29:52.005151   10731 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1101 08:29:52.102302   10731 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1101 08:29:52.102463   10731 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1101 08:29:53.103073   10731 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.000895763s
	I1101 08:29:53.105977   10731 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1101 08:29:53.106121   10731 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1101 08:29:53.106227   10731 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1101 08:29:53.106304   10731 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1101 08:29:54.660126   10731 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 1.55416674s
	I1101 08:29:56.077706   10731 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 2.971638524s
	I1101 08:29:56.607587   10731 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 3.501591806s
	I1101 08:29:56.618695   10731 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1101 08:29:56.628394   10731 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1101 08:29:56.637122   10731 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1101 08:29:56.637356   10731 kubeadm.go:319] [mark-control-plane] Marking the node addons-491859 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1101 08:29:56.644796   10731 kubeadm.go:319] [bootstrap-token] Using token: wo1v43.v1n7lssssb2gwy0c
	I1101 08:29:56.646016   10731 out.go:252]   - Configuring RBAC rules ...
	I1101 08:29:56.646176   10731 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1101 08:29:56.651320   10731 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1101 08:29:56.656416   10731 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1101 08:29:56.658758   10731 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1101 08:29:56.661186   10731 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1101 08:29:56.664533   10731 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1101 08:29:57.013451   10731 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1101 08:29:57.429805   10731 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1101 08:29:58.014521   10731 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1101 08:29:58.015314   10731 kubeadm.go:319] 
	I1101 08:29:58.015376   10731 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1101 08:29:58.015408   10731 kubeadm.go:319] 
	I1101 08:29:58.015596   10731 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1101 08:29:58.015620   10731 kubeadm.go:319] 
	I1101 08:29:58.015660   10731 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1101 08:29:58.015737   10731 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1101 08:29:58.015815   10731 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1101 08:29:58.015825   10731 kubeadm.go:319] 
	I1101 08:29:58.015944   10731 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1101 08:29:58.015964   10731 kubeadm.go:319] 
	I1101 08:29:58.016044   10731 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1101 08:29:58.016054   10731 kubeadm.go:319] 
	I1101 08:29:58.016128   10731 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1101 08:29:58.016250   10731 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1101 08:29:58.016340   10731 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1101 08:29:58.016351   10731 kubeadm.go:319] 
	I1101 08:29:58.016479   10731 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1101 08:29:58.016588   10731 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1101 08:29:58.016622   10731 kubeadm.go:319] 
	I1101 08:29:58.016759   10731 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token wo1v43.v1n7lssssb2gwy0c \
	I1101 08:29:58.016934   10731 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:77adef89518789e9abc276228bed68518bce4d031c33004dbbcc2105ab5e0752 \
	I1101 08:29:58.016967   10731 kubeadm.go:319] 	--control-plane 
	I1101 08:29:58.016991   10731 kubeadm.go:319] 
	I1101 08:29:58.017124   10731 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1101 08:29:58.017138   10731 kubeadm.go:319] 
	I1101 08:29:58.017265   10731 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token wo1v43.v1n7lssssb2gwy0c \
	I1101 08:29:58.017402   10731 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:77adef89518789e9abc276228bed68518bce4d031c33004dbbcc2105ab5e0752 
	I1101 08:29:58.019000   10731 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1043-gcp\n", err: exit status 1
	I1101 08:29:58.019147   10731 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1101 08:29:58.019180   10731 cni.go:84] Creating CNI manager for ""
	I1101 08:29:58.019194   10731 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1101 08:29:58.020819   10731 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1101 08:29:58.022079   10731 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1101 08:29:58.026366   10731 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1101 08:29:58.026382   10731 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1101 08:29:58.039818   10731 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
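Because the docker driver with the crio runtime needs kindnet, minikube stages the CNI manifest at /var/tmp/minikube/cni.yaml and applies it with the bundled kubectl. A rough sketch of that write-then-apply step, reusing the binary and kubeconfig paths shown in the log (the applyManifest helper itself is hypothetical):

	package main

	import (
		"fmt"
		"os"
		"os/exec"
	)

	// applyManifest stages the manifest on disk and applies it with the bundled
	// kubectl, mirroring the scp + `kubectl apply` pair in the log above.
	func applyManifest(manifest []byte) error {
		const path = "/var/tmp/minikube/cni.yaml"
		if err := os.WriteFile(path, manifest, 0o644); err != nil {
			return err
		}
		out, err := exec.Command("/var/lib/minikube/binaries/v1.34.1/kubectl",
			"apply", "--kubeconfig=/var/lib/minikube/kubeconfig", "-f", path).CombinedOutput()
		if err != nil {
			return fmt.Errorf("kubectl apply: %v\n%s", err, out)
		}
		return nil
	}

	func main() {
		if err := applyManifest([]byte("# kindnet manifest would go here\n")); err != nil {
			fmt.Println(err)
		}
	}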
	I1101 08:29:58.239103   10731 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1101 08:29:58.239193   10731 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 08:29:58.239219   10731 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-491859 minikube.k8s.io/updated_at=2025_11_01T08_29_58_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=21e20c7776311c6e29254646bf2620ea610dd192 minikube.k8s.io/name=addons-491859 minikube.k8s.io/primary=true
	I1101 08:29:58.314884   10731 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 08:29:58.314938   10731 ops.go:34] apiserver oom_adj: -16
	I1101 08:29:58.815245   10731 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 08:29:59.315706   10731 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 08:29:59.815551   10731 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 08:30:00.315568   10731 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 08:30:00.816002   10731 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 08:30:01.315085   10731 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 08:30:01.815408   10731 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 08:30:02.315961   10731 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 08:30:02.815934   10731 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 08:30:03.315654   10731 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 08:30:03.390965   10731 kubeadm.go:1114] duration metric: took 5.151843777s to wait for elevateKubeSystemPrivileges
	I1101 08:30:03.391002   10731 kubeadm.go:403] duration metric: took 15.823344629s to StartCluster
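The repeated `kubectl get sa default` calls above, spaced roughly 500ms apart, are a readiness poll: minikube waits for the default service account to exist before binding kube-system:default to cluster-admin, and the two duration metrics close out that wait and StartCluster as a whole. A hedged sketch of such a poll loop (waitForDefaultSA is a made-up helper, not the elevateKubeSystemPrivileges code):

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	// waitForDefaultSA polls `kubectl get sa default` about every 500ms until
	// the default service account exists or the timeout expires, like the
	// repeated log lines above. Hypothetical helper for illustration.
	func waitForDefaultSA(kubeconfig string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			if exec.Command("kubectl", "get", "sa", "default", "--kubeconfig="+kubeconfig).Run() == nil {
				return nil
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("default service account not ready after %s", timeout)
	}

	func main() {
		start := time.Now()
		if err := waitForDefaultSA("/var/lib/minikube/kubeconfig", 5*time.Minute); err != nil {
			fmt.Println(err)
			return
		}
		fmt.Printf("took %s to wait for the default service account\n", time.Since(start))
	}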
	I1101 08:30:03.391022   10731 settings.go:142] acquiring lock: {Name:mkb1ba7d0d4bb15f3f0746ce486d72703f901580 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 08:30:03.391147   10731 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21835-5913/kubeconfig
	I1101 08:30:03.391707   10731 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21835-5913/kubeconfig: {Name:mk9d3795c1875d6d9ba81c81d9d8436a6b7942d2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 08:30:03.391948   10731 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1101 08:30:03.392021   10731 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1101 08:30:03.392053   10731 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1101 08:30:03.392190   10731 config.go:182] Loaded profile config "addons-491859": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 08:30:03.392229   10731 addons.go:70] Setting default-storageclass=true in profile "addons-491859"
	I1101 08:30:03.392231   10731 addons.go:70] Setting yakd=true in profile "addons-491859"
	I1101 08:30:03.392242   10731 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "addons-491859"
	I1101 08:30:03.392253   10731 addons.go:239] Setting addon yakd=true in "addons-491859"
	I1101 08:30:03.392275   10731 addons.go:70] Setting gcp-auth=true in profile "addons-491859"
	I1101 08:30:03.392307   10731 mustload.go:66] Loading cluster: addons-491859
	I1101 08:30:03.392308   10731 host.go:66] Checking if "addons-491859" exists ...
	I1101 08:30:03.392552   10731 config.go:182] Loaded profile config "addons-491859": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 08:30:03.392682   10731 cli_runner.go:164] Run: docker container inspect addons-491859 --format={{.State.Status}}
	I1101 08:30:03.392812   10731 cli_runner.go:164] Run: docker container inspect addons-491859 --format={{.State.Status}}
	I1101 08:30:03.392809   10731 addons.go:70] Setting registry=true in profile "addons-491859"
	I1101 08:30:03.392886   10731 addons.go:239] Setting addon registry=true in "addons-491859"
	I1101 08:30:03.392901   10731 addons.go:70] Setting amd-gpu-device-plugin=true in profile "addons-491859"
	I1101 08:30:03.392915   10731 cli_runner.go:164] Run: docker container inspect addons-491859 --format={{.State.Status}}
	I1101 08:30:03.392923   10731 addons.go:70] Setting cloud-spanner=true in profile "addons-491859"
	I1101 08:30:03.392933   10731 addons.go:239] Setting addon cloud-spanner=true in "addons-491859"
	I1101 08:30:03.392948   10731 host.go:66] Checking if "addons-491859" exists ...
	I1101 08:30:03.392958   10731 host.go:66] Checking if "addons-491859" exists ...
	I1101 08:30:03.393216   10731 addons.go:70] Setting inspektor-gadget=true in profile "addons-491859"
	I1101 08:30:03.393251   10731 addons.go:239] Setting addon inspektor-gadget=true in "addons-491859"
	I1101 08:30:03.393276   10731 host.go:66] Checking if "addons-491859" exists ...
	I1101 08:30:03.393509   10731 cli_runner.go:164] Run: docker container inspect addons-491859 --format={{.State.Status}}
	I1101 08:30:03.393567   10731 cli_runner.go:164] Run: docker container inspect addons-491859 --format={{.State.Status}}
	I1101 08:30:03.393750   10731 addons.go:70] Setting volcano=true in profile "addons-491859"
	I1101 08:30:03.393844   10731 addons.go:239] Setting addon volcano=true in "addons-491859"
	I1101 08:30:03.393940   10731 host.go:66] Checking if "addons-491859" exists ...
	I1101 08:30:03.394111   10731 cli_runner.go:164] Run: docker container inspect addons-491859 --format={{.State.Status}}
	I1101 08:30:03.394308   10731 addons.go:70] Setting metrics-server=true in profile "addons-491859"
	I1101 08:30:03.394327   10731 addons.go:239] Setting addon metrics-server=true in "addons-491859"
	I1101 08:30:03.394349   10731 host.go:66] Checking if "addons-491859" exists ...
	I1101 08:30:03.394639   10731 cli_runner.go:164] Run: docker container inspect addons-491859 --format={{.State.Status}}
	I1101 08:30:03.394807   10731 cli_runner.go:164] Run: docker container inspect addons-491859 --format={{.State.Status}}
	I1101 08:30:03.395199   10731 addons.go:70] Setting volumesnapshots=true in profile "addons-491859"
	I1101 08:30:03.395224   10731 addons.go:239] Setting addon volumesnapshots=true in "addons-491859"
	I1101 08:30:03.395250   10731 host.go:66] Checking if "addons-491859" exists ...
	I1101 08:30:03.392917   10731 addons.go:239] Setting addon amd-gpu-device-plugin=true in "addons-491859"
	I1101 08:30:03.396651   10731 host.go:66] Checking if "addons-491859" exists ...
	I1101 08:30:03.397196   10731 cli_runner.go:164] Run: docker container inspect addons-491859 --format={{.State.Status}}
	I1101 08:30:03.397600   10731 addons.go:70] Setting nvidia-device-plugin=true in profile "addons-491859"
	I1101 08:30:03.397618   10731 addons.go:239] Setting addon nvidia-device-plugin=true in "addons-491859"
	I1101 08:30:03.397650   10731 host.go:66] Checking if "addons-491859" exists ...
	I1101 08:30:03.398137   10731 addons.go:70] Setting ingress=true in profile "addons-491859"
	I1101 08:30:03.398200   10731 addons.go:239] Setting addon ingress=true in "addons-491859"
	I1101 08:30:03.398272   10731 host.go:66] Checking if "addons-491859" exists ...
	I1101 08:30:03.398958   10731 addons.go:70] Setting storage-provisioner-rancher=true in profile "addons-491859"
	I1101 08:30:03.398993   10731 addons_storage_classes.go:34] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-491859"
	I1101 08:30:03.399360   10731 cli_runner.go:164] Run: docker container inspect addons-491859 --format={{.State.Status}}
	I1101 08:30:03.399985   10731 addons.go:70] Setting ingress-dns=true in profile "addons-491859"
	I1101 08:30:03.400057   10731 addons.go:239] Setting addon ingress-dns=true in "addons-491859"
	I1101 08:30:03.400100   10731 host.go:66] Checking if "addons-491859" exists ...
	I1101 08:30:03.400435   10731 out.go:179] * Verifying Kubernetes components...
	I1101 08:30:03.401503   10731 cli_runner.go:164] Run: docker container inspect addons-491859 --format={{.State.Status}}
	I1101 08:30:03.400500   10731 addons.go:70] Setting storage-provisioner=true in profile "addons-491859"
	I1101 08:30:03.401684   10731 addons.go:239] Setting addon storage-provisioner=true in "addons-491859"
	I1101 08:30:03.401716   10731 host.go:66] Checking if "addons-491859" exists ...
	I1101 08:30:03.402151   10731 cli_runner.go:164] Run: docker container inspect addons-491859 --format={{.State.Status}}
	I1101 08:30:03.400522   10731 addons.go:70] Setting registry-creds=true in profile "addons-491859"
	I1101 08:30:03.402347   10731 addons.go:239] Setting addon registry-creds=true in "addons-491859"
	I1101 08:30:03.402391   10731 host.go:66] Checking if "addons-491859" exists ...
	I1101 08:30:03.392186   10731 addons.go:70] Setting csi-hostpath-driver=true in profile "addons-491859"
	I1101 08:30:03.402484   10731 addons.go:239] Setting addon csi-hostpath-driver=true in "addons-491859"
	I1101 08:30:03.402503   10731 host.go:66] Checking if "addons-491859" exists ...
	I1101 08:30:03.403980   10731 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 08:30:03.406655   10731 cli_runner.go:164] Run: docker container inspect addons-491859 --format={{.State.Status}}
	I1101 08:30:03.407485   10731 cli_runner.go:164] Run: docker container inspect addons-491859 --format={{.State.Status}}
	I1101 08:30:03.407543   10731 cli_runner.go:164] Run: docker container inspect addons-491859 --format={{.State.Status}}
	I1101 08:30:03.408313   10731 cli_runner.go:164] Run: docker container inspect addons-491859 --format={{.State.Status}}
	I1101 08:30:03.409111   10731 cli_runner.go:164] Run: docker container inspect addons-491859 --format={{.State.Status}}
	I1101 08:30:03.439792   10731 host.go:66] Checking if "addons-491859" exists ...
	I1101 08:30:03.451368   10731 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.42
	I1101 08:30:03.452570   10731 addons.go:436] installing /etc/kubernetes/addons/deployment.yaml
	I1101 08:30:03.452596   10731 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1101 08:30:03.452667   10731 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-491859
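Each addon installer needs an SSH session into the node container, so minikube asks Docker which host port is published for 22/tcp using the --format template shown in the cli_runner line above; the later sshutil lines then dial 127.0.0.1 on that port (32768 here). A small hedged sketch of the same lookup (sshHostPort is a hypothetical helper):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// sshHostPort asks Docker which host port is mapped to the container's
	// 22/tcp, using the same --format template as the log line above.
	func sshHostPort(container string) (string, error) {
		format := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
		out, err := exec.Command("docker", "container", "inspect", "-f", format, container).Output()
		if err != nil {
			return "", fmt.Errorf("inspect %s: %w", container, err)
		}
		return strings.TrimSpace(string(out)), nil
	}

	func main() {
		port, err := sshHostPort("addons-491859")
		if err != nil {
			fmt.Println(err)
			return
		}
		// The sshutil lines later in the log connect to 127.0.0.1 on this port.
		fmt.Println("ssh docker@127.0.0.1 -p", port)
	}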
	I1101 08:30:03.469293   10731 addons.go:239] Setting addon default-storageclass=true in "addons-491859"
	I1101 08:30:03.469425   10731 host.go:66] Checking if "addons-491859" exists ...
	I1101 08:30:03.469784   10731 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1101 08:30:03.470722   10731 cli_runner.go:164] Run: docker container inspect addons-491859 --format={{.State.Status}}
	I1101 08:30:03.471012   10731 addons.go:436] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1101 08:30:03.471090   10731 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1101 08:30:03.471208   10731 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-491859
	W1101 08:30:03.488274   10731 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1101 08:30:03.495847   10731 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I1101 08:30:03.497296   10731 addons.go:436] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I1101 08:30:03.497321   10731 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I1101 08:30:03.497326   10731 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I1101 08:30:03.497393   10731 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-491859
	I1101 08:30:03.498605   10731 out.go:179]   - Using image docker.io/registry:3.0.0
	I1101 08:30:03.498891   10731 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1101 08:30:03.499023   10731 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.45.0
	I1101 08:30:03.499900   10731 addons.go:436] installing /etc/kubernetes/addons/registry-rc.yaml
	I1101 08:30:03.499915   10731 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1101 08:30:03.500008   10731 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-491859
	I1101 08:30:03.500578   10731 addons.go:436] installing /etc/kubernetes/addons/ig-crd.yaml
	I1101 08:30:03.500592   10731 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (14 bytes)
	I1101 08:30:03.500675   10731 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-491859
	I1101 08:30:03.501025   10731 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1101 08:30:03.501060   10731 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1101 08:30:03.501115   10731 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-491859
	I1101 08:30:03.507713   10731 addons.go:239] Setting addon storage-provisioner-rancher=true in "addons-491859"
	I1101 08:30:03.507764   10731 host.go:66] Checking if "addons-491859" exists ...
	I1101 08:30:03.508291   10731 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1101 08:30:03.508562   10731 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.18.0
	I1101 08:30:03.509542   10731 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1101 08:30:03.509638   10731 addons.go:436] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1101 08:30:03.509649   10731 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1101 08:30:03.509705   10731 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-491859
	I1101 08:30:03.510112   10731 cli_runner.go:164] Run: docker container inspect addons-491859 --format={{.State.Status}}
	I1101 08:30:03.521910   10731 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1101 08:30:03.525239   10731 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1101 08:30:03.525270   10731 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.13.3
	I1101 08:30:03.526481   10731 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1101 08:30:03.526541   10731 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1101 08:30:03.527910   10731 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I1101 08:30:03.529554   10731 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1101 08:30:03.529688   10731 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1101 08:30:03.529816   10731 addons.go:436] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1101 08:30:03.529830   10731 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I1101 08:30:03.529918   10731 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-491859
	I1101 08:30:03.531367   10731 addons.go:436] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1101 08:30:03.531385   10731 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1101 08:30:03.531438   10731 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-491859
	I1101 08:30:03.531850   10731 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1101 08:30:03.532990   10731 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1101 08:30:03.534068   10731 addons.go:436] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1101 08:30:03.534087   10731 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1101 08:30:03.534148   10731 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-491859
	I1101 08:30:03.546403   10731 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21835-5913/.minikube/machines/addons-491859/id_rsa Username:docker}
	I1101 08:30:03.547268   10731 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1101 08:30:03.547277   10731 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I1101 08:30:03.548467   10731 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1101 08:30:03.548515   10731 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1101 08:30:03.548584   10731 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-491859
	I1101 08:30:03.550060   10731 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21835-5913/.minikube/machines/addons-491859/id_rsa Username:docker}
	I1101 08:30:03.551275   10731 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1101 08:30:03.551294   10731 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1101 08:30:03.551371   10731 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-491859
	I1101 08:30:03.552134   10731 addons.go:436] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1101 08:30:03.552150   10731 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1101 08:30:03.552208   10731 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-491859
	I1101 08:30:03.558147   10731 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21835-5913/.minikube/machines/addons-491859/id_rsa Username:docker}
	I1101 08:30:03.562741   10731 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1101 08:30:03.570563   10731 addons.go:436] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1101 08:30:03.570585   10731 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1101 08:30:03.570651   10731 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-491859
	I1101 08:30:03.571001   10731 out.go:179]   - Using image docker.io/busybox:stable
	I1101 08:30:03.572980   10731 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1101 08:30:03.574420   10731 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1101 08:30:03.574438   10731 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1101 08:30:03.574504   10731 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-491859
	I1101 08:30:03.594336   10731 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21835-5913/.minikube/machines/addons-491859/id_rsa Username:docker}
	I1101 08:30:03.597748   10731 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21835-5913/.minikube/machines/addons-491859/id_rsa Username:docker}
	I1101 08:30:03.599494   10731 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1101 08:30:03.599137   10731 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21835-5913/.minikube/machines/addons-491859/id_rsa Username:docker}
	I1101 08:30:03.603413   10731 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21835-5913/.minikube/machines/addons-491859/id_rsa Username:docker}
	I1101 08:30:03.606503   10731 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21835-5913/.minikube/machines/addons-491859/id_rsa Username:docker}
	I1101 08:30:03.606963   10731 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21835-5913/.minikube/machines/addons-491859/id_rsa Username:docker}
	I1101 08:30:03.617302   10731 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21835-5913/.minikube/machines/addons-491859/id_rsa Username:docker}
	I1101 08:30:03.620094   10731 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21835-5913/.minikube/machines/addons-491859/id_rsa Username:docker}
	I1101 08:30:03.626000   10731 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1101 08:30:03.626653   10731 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21835-5913/.minikube/machines/addons-491859/id_rsa Username:docker}
	I1101 08:30:03.633939   10731 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21835-5913/.minikube/machines/addons-491859/id_rsa Username:docker}
	I1101 08:30:03.640221   10731 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21835-5913/.minikube/machines/addons-491859/id_rsa Username:docker}
	I1101 08:30:03.649782   10731 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21835-5913/.minikube/machines/addons-491859/id_rsa Username:docker}
	W1101 08:30:03.649815   10731 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1101 08:30:03.649874   10731 retry.go:31] will retry after 372.854236ms: ssh: handshake failed: EOF
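The handshake failure above is transient: the SSH daemon in the node container is still coming up, so the runner schedules another attempt after a short, jittered delay instead of failing the addon install. A minimal sketch of that retry-with-backoff pattern, assuming a made-up retry helper rather than minikube's retry package:

	package main

	import (
		"fmt"
		"math/rand"
		"time"
	)

	// retry runs fn up to attempts times, sleeping a jittered backoff between
	// tries, in the spirit of the "will retry after 372.854236ms" line above.
	func retry(attempts int, base time.Duration, fn func() error) error {
		var err error
		for i := 0; i < attempts; i++ {
			if err = fn(); err == nil {
				return nil
			}
			wait := base + time.Duration(rand.Int63n(int64(base)))
			fmt.Printf("will retry after %s: %v\n", wait, err)
			time.Sleep(wait)
		}
		return err
	}

	func main() {
		dials := 0
		_ = retry(3, 250*time.Millisecond, func() error {
			dials++
			if dials < 2 {
				return fmt.Errorf("ssh: handshake failed: EOF")
			}
			return nil
		})
	}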
	I1101 08:30:03.732765   10731 addons.go:436] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1101 08:30:03.732790   10731 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1101 08:30:03.744011   10731 addons.go:436] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1101 08:30:03.744051   10731 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I1101 08:30:03.750706   10731 addons.go:436] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1101 08:30:03.750729   10731 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1101 08:30:03.750911   10731 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I1101 08:30:03.772429   10731 addons.go:436] installing /etc/kubernetes/addons/registry-svc.yaml
	I1101 08:30:03.772460   10731 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1101 08:30:03.776339   10731 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1101 08:30:03.784021   10731 addons.go:436] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1101 08:30:03.784051   10731 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1101 08:30:03.788051   10731 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1101 08:30:03.789803   10731 addons.go:436] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1101 08:30:03.789838   10731 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1101 08:30:03.792023   10731 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1101 08:30:03.793099   10731 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1101 08:30:03.793122   10731 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1101 08:30:03.794618   10731 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1101 08:30:03.811703   10731 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1101 08:30:03.811842   10731 addons.go:436] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1101 08:30:03.811855   10731 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1101 08:30:03.812454   10731 addons.go:436] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1101 08:30:03.812508   10731 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1101 08:30:03.818716   10731 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1101 08:30:03.825949   10731 addons.go:436] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1101 08:30:03.825975   10731 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1101 08:30:03.829212   10731 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1101 08:30:03.829242   10731 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1101 08:30:03.834190   10731 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1101 08:30:03.837859   10731 addons.go:436] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1101 08:30:03.837904   10731 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1101 08:30:03.842320   10731 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1101 08:30:03.860421   10731 addons.go:436] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1101 08:30:03.860520   10731 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1101 08:30:03.863770   10731 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1101 08:30:03.863792   10731 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1101 08:30:03.880009   10731 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1101 08:30:03.885813   10731 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1101 08:30:03.892961   10731 addons.go:436] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1101 08:30:03.893059   10731 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1101 08:30:03.896066   10731 addons.go:436] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1101 08:30:03.896094   10731 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1101 08:30:03.917359   10731 addons.go:436] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1101 08:30:03.917393   10731 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1101 08:30:03.936348   10731 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1101 08:30:03.954489   10731 addons.go:436] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1101 08:30:03.954536   10731 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1101 08:30:03.975980   10731 addons.go:436] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1101 08:30:03.976005   10731 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1101 08:30:03.978702   10731 start.go:977] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I1101 08:30:03.979819   10731 node_ready.go:35] waiting up to 6m0s for node "addons-491859" to be "Ready" ...
	I1101 08:30:04.034056   10731 addons.go:436] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1101 08:30:04.034128   10731 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1101 08:30:04.044135   10731 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1101 08:30:04.114761   10731 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1101 08:30:04.114791   10731 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1101 08:30:04.213240   10731 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1101 08:30:04.213272   10731 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1101 08:30:04.262549   10731 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1101 08:30:04.262582   10731 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1101 08:30:04.306479   10731 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1101 08:30:04.306527   10731 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1101 08:30:04.345132   10731 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1101 08:30:04.369379   10731 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1101 08:30:04.369413   10731 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1101 08:30:04.425385   10731 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1101 08:30:04.484618   10731 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-491859" context rescaled to 1 replicas
	I1101 08:30:04.794803   10731 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.018416231s)
	W1101 08:30:04.794938   10731 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 08:30:04.795028   10731 retry.go:31] will retry after 284.875547ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 08:30:04.796727   10731 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (1.008636924s)
	I1101 08:30:04.797163   10731 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (1.005108247s)
	I1101 08:30:04.797213   10731 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (1.002575723s)
	I1101 08:30:05.023052   10731 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (1.188818769s)
	I1101 08:30:05.023105   10731 addons.go:480] Verifying addon ingress=true in "addons-491859"
	I1101 08:30:05.023233   10731 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (1.143190692s)
	I1101 08:30:05.023171   10731 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.180804805s)
	I1101 08:30:05.023328   10731 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (1.13722538s)
	I1101 08:30:05.023348   10731 addons.go:480] Verifying addon registry=true in "addons-491859"
	I1101 08:30:05.023463   10731 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.087084333s)
	I1101 08:30:05.024408   10731 addons.go:480] Verifying addon metrics-server=true in "addons-491859"
	I1101 08:30:05.024795   10731 out.go:179] * Verifying ingress addon...
	I1101 08:30:05.024794   10731 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-491859 service yakd-dashboard -n yakd-dashboard
	
	I1101 08:30:05.025511   10731 out.go:179] * Verifying registry addon...
	I1101 08:30:05.027123   10731 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1101 08:30:05.028492   10731 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1101 08:30:05.030974   10731 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I1101 08:30:05.030997   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:30:05.031105   10731 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1101 08:30:05.031124   10731 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:30:05.080979   10731 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1101 08:30:05.476586   10731 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.432394376s)
	W1101 08:30:05.476629   10731 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1101 08:30:05.476652   10731 retry.go:31] will retry after 362.914869ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1101 08:30:05.476691   10731 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (1.131517968s)
	I1101 08:30:05.476891   10731 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (1.051450818s)
	I1101 08:30:05.476916   10731 addons.go:480] Verifying addon csi-hostpath-driver=true in "addons-491859"
	I1101 08:30:05.478896   10731 out.go:179] * Verifying csi-hostpath-driver addon...
	I1101 08:30:05.480997   10731 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1101 08:30:05.483732   10731 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1101 08:30:05.483754   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:30:05.584950   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:30:05.585097   10731 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1101 08:30:05.735660   10731 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 08:30:05.735689   10731 retry.go:31] will retry after 386.234411ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 08:30:05.840005   10731 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	W1101 08:30:05.983211   10731 node_ready.go:57] node "addons-491859" has "Ready":"False" status (will retry)
	I1101 08:30:05.984141   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:30:06.030027   10731 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:30:06.031630   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:30:06.122367   10731 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1101 08:30:06.484096   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:30:06.530216   10731 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:30:06.531677   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:30:06.984119   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:30:07.031030   10731 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:30:07.031422   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:30:07.484172   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:30:07.530028   10731 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:30:07.531675   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:30:07.984217   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:30:08.029911   10731 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:30:08.031595   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:30:08.313033   10731 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.472980617s)
	I1101 08:30:08.313133   10731 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (2.190732222s)
	W1101 08:30:08.313160   10731 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 08:30:08.313180   10731 retry.go:31] will retry after 498.995051ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	W1101 08:30:08.482791   10731 node_ready.go:57] node "addons-491859" has "Ready":"False" status (will retry)
	I1101 08:30:08.483935   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:30:08.530611   10731 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:30:08.531226   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:30:08.813003   10731 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1101 08:30:08.983701   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:30:09.030593   10731 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:30:09.030946   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1101 08:30:09.353024   10731 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 08:30:09.353060   10731 retry.go:31] will retry after 1.048520412s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 08:30:09.484232   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:30:09.531211   10731 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:30:09.531277   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:30:09.983610   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:30:10.030529   10731 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:30:10.030836   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:30:10.402391   10731 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	W1101 08:30:10.483335   10731 node_ready.go:57] node "addons-491859" has "Ready":"False" status (will retry)
	I1101 08:30:10.484402   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:30:10.530217   10731 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:30:10.530760   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1101 08:30:10.942923   10731 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 08:30:10.942956   10731 retry.go:31] will retry after 682.933229ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 08:30:10.983672   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:30:11.030486   10731 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:30:11.031040   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:30:11.052949   10731 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1101 08:30:11.053016   10731 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-491859
	I1101 08:30:11.071657   10731 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21835-5913/.minikube/machines/addons-491859/id_rsa Username:docker}
	I1101 08:30:11.182470   10731 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1101 08:30:11.196199   10731 addons.go:239] Setting addon gcp-auth=true in "addons-491859"
	I1101 08:30:11.196254   10731 host.go:66] Checking if "addons-491859" exists ...
	I1101 08:30:11.196631   10731 cli_runner.go:164] Run: docker container inspect addons-491859 --format={{.State.Status}}
	I1101 08:30:11.214892   10731 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1101 08:30:11.214949   10731 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-491859
	I1101 08:30:11.233617   10731 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21835-5913/.minikube/machines/addons-491859/id_rsa Username:docker}
	I1101 08:30:11.333840   10731 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1101 08:30:11.335074   10731 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1101 08:30:11.335999   10731 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1101 08:30:11.336024   10731 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1101 08:30:11.349642   10731 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1101 08:30:11.349664   10731 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1101 08:30:11.362726   10731 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1101 08:30:11.362745   10731 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1101 08:30:11.376508   10731 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1101 08:30:11.483416   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:30:11.530165   10731 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:30:11.531617   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:30:11.626669   10731 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1101 08:30:11.694019   10731 addons.go:480] Verifying addon gcp-auth=true in "addons-491859"
	I1101 08:30:11.695542   10731 out.go:179] * Verifying gcp-auth addon...
	I1101 08:30:11.697650   10731 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1101 08:30:11.700461   10731 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1101 08:30:11.700483   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:30:11.983776   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:30:12.030580   10731 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:30:12.031052   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1101 08:30:12.191761   10731 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 08:30:12.191791   10731 retry.go:31] will retry after 2.830148725s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 08:30:12.200093   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:30:12.484090   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:30:12.530958   10731 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:30:12.531292   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:30:12.701182   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1101 08:30:12.983164   10731 node_ready.go:57] node "addons-491859" has "Ready":"False" status (will retry)
	I1101 08:30:12.984214   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:30:13.029697   10731 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:30:13.031328   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:30:13.200898   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:30:13.483891   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:30:13.530735   10731 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:30:13.530844   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:30:13.701115   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:30:13.983835   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:30:14.030909   10731 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:30:14.031033   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:30:14.200833   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:30:14.483727   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:30:14.530352   10731 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:30:14.530900   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:30:14.700386   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1101 08:30:14.983326   10731 node_ready.go:57] node "addons-491859" has "Ready":"False" status (will retry)
	I1101 08:30:14.983972   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:30:15.023119   10731 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1101 08:30:15.030268   10731 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:30:15.031042   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:30:15.201200   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:30:15.484013   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:30:15.530091   10731 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:30:15.530781   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1101 08:30:15.560653   10731 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 08:30:15.560697   10731 retry.go:31] will retry after 3.900593045s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 08:30:15.700450   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:30:15.983281   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:30:16.030074   10731 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:30:16.030561   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:30:16.200354   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:30:16.484199   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:30:16.529992   10731 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:30:16.531680   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:30:16.701129   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:30:16.983685   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:30:17.030570   10731 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:30:17.030741   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:30:17.200322   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1101 08:30:17.483257   10731 node_ready.go:57] node "addons-491859" has "Ready":"False" status (will retry)
	I1101 08:30:17.483974   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:30:17.530732   10731 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:30:17.531286   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:30:17.701273   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:30:17.984294   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:30:18.029790   10731 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:30:18.031556   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:30:18.201466   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:30:18.483645   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:30:18.530232   10731 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:30:18.530991   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:30:18.700816   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:30:18.983265   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:30:19.029817   10731 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:30:19.031700   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:30:19.200153   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:30:19.461470   10731 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1101 08:30:19.483636   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:30:19.530669   10731 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:30:19.530979   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:30:19.700493   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1101 08:30:19.983015   10731 node_ready.go:57] node "addons-491859" has "Ready":"False" status (will retry)
	I1101 08:30:19.983726   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1101 08:30:20.013982   10731 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 08:30:20.014012   10731 retry.go:31] will retry after 2.317231137s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 08:30:20.030969   10731 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:30:20.031601   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:30:20.200193   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:30:20.484381   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:30:20.530206   10731 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:30:20.531821   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:30:20.700457   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:30:20.983695   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:30:21.030375   10731 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:30:21.030963   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:30:21.200636   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:30:21.483893   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:30:21.530527   10731 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:30:21.531096   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:30:21.700823   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:30:21.983609   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:30:22.030592   10731 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:30:22.030897   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:30:22.200744   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:30:22.331993   10731 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	W1101 08:30:22.483098   10731 node_ready.go:57] node "addons-491859" has "Ready":"False" status (will retry)
	I1101 08:30:22.484570   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:30:22.530469   10731 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:30:22.531360   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:30:22.700244   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1101 08:30:22.857684   10731 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 08:30:22.857717   10731 retry.go:31] will retry after 8.632870588s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 08:30:22.983497   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:30:23.030363   10731 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:30:23.030815   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:30:23.200584   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:30:23.483626   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:30:23.530779   10731 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:30:23.530816   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:30:23.700646   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:30:23.983181   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:30:24.029639   10731 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:30:24.031328   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:30:24.200831   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:30:24.483634   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:30:24.530201   10731 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:30:24.530884   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:30:24.700364   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1101 08:30:24.983519   10731 node_ready.go:57] node "addons-491859" has "Ready":"False" status (will retry)
	I1101 08:30:24.984187   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:30:25.029687   10731 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:30:25.031784   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:30:25.200239   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:30:25.483544   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:30:25.530104   10731 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:30:25.530880   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:30:25.700319   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:30:25.984276   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:30:26.029808   10731 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:30:26.031669   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:30:26.200152   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:30:26.484076   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:30:26.530979   10731 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:30:26.531360   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:30:26.700931   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:30:26.983554   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:30:27.030415   10731 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:30:27.030599   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:30:27.201042   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1101 08:30:27.483165   10731 node_ready.go:57] node "addons-491859" has "Ready":"False" status (will retry)
	I1101 08:30:27.483776   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:30:27.530494   10731 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:30:27.531077   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:30:27.700441   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:30:27.983287   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:30:28.030332   10731 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:30:28.031503   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:30:28.201174   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:30:28.484264   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:30:28.529907   10731 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:30:28.531601   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:30:28.701075   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:30:28.983567   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:30:29.030286   10731 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:30:29.030635   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:30:29.200335   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:30:29.484022   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:30:29.530974   10731 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:30:29.531203   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:30:29.700718   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1101 08:30:29.982342   10731 node_ready.go:57] node "addons-491859" has "Ready":"False" status (will retry)
	I1101 08:30:29.983261   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:30:30.030005   10731 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:30:30.031455   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:30:30.201147   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:30:30.483736   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:30:30.530542   10731 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:30:30.531006   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:30:30.700342   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:30:30.983973   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:30:31.030709   10731 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:30:31.031228   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:30:31.201218   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:30:31.483666   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:30:31.490784   10731 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1101 08:30:31.530772   10731 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:30:31.531515   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:30:31.701314   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:30:31.983234   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1101 08:30:32.022305   10731 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 08:30:32.022336   10731 retry.go:31] will retry after 9.63990457s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 08:30:32.030013   10731 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:30:32.031473   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:30:32.201003   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1101 08:30:32.482745   10731 node_ready.go:57] node "addons-491859" has "Ready":"False" status (will retry)
	I1101 08:30:32.483469   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:30:32.530122   10731 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:30:32.530795   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:30:32.700386   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:30:32.983857   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:30:33.030445   10731 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:30:33.030962   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:30:33.200556   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:30:33.483768   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:30:33.530776   10731 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:30:33.531010   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:30:33.700573   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:30:33.983287   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:30:34.030136   10731 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:30:34.030648   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:30:34.200283   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1101 08:30:34.482990   10731 node_ready.go:57] node "addons-491859" has "Ready":"False" status (will retry)
	I1101 08:30:34.483946   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:30:34.530899   10731 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:30:34.531410   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:30:34.701010   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:30:34.983569   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:30:35.030238   10731 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:30:35.030676   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:30:35.200150   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:30:35.483750   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:30:35.530244   10731 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:30:35.530897   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:30:35.700731   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:30:35.983110   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:30:36.030968   10731 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:30:36.031158   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:30:36.201114   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1101 08:30:36.483175   10731 node_ready.go:57] node "addons-491859" has "Ready":"False" status (will retry)
	I1101 08:30:36.484298   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:30:36.529892   10731 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:30:36.531505   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:30:36.700995   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:30:36.983486   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:30:37.029986   10731 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:30:37.030806   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:30:37.200364   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:30:37.483746   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:30:37.530514   10731 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:30:37.531099   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:30:37.700743   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:30:37.983982   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:30:38.030619   10731 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:30:38.031026   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:30:38.201043   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1101 08:30:38.483277   10731 node_ready.go:57] node "addons-491859" has "Ready":"False" status (will retry)
	I1101 08:30:38.485249   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:30:38.529744   10731 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:30:38.531168   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:30:38.701485   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:30:38.983328   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:30:39.029910   10731 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:30:39.030580   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:30:39.200178   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:30:39.483623   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:30:39.530268   10731 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:30:39.530856   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:30:39.700788   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:30:39.983215   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:30:40.029775   10731 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:30:40.031309   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:30:40.201011   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:30:40.483775   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:30:40.530455   10731 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:30:40.530994   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:30:40.700448   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1101 08:30:40.983153   10731 node_ready.go:57] node "addons-491859" has "Ready":"False" status (will retry)
	I1101 08:30:40.983243   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:30:41.029843   10731 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:30:41.031325   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:30:41.200632   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:30:41.483779   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:30:41.530252   10731 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:30:41.531050   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:30:41.663367   10731 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1101 08:30:41.700784   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:30:41.983610   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:30:42.030326   10731 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:30:42.030923   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:30:42.200567   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1101 08:30:42.217133   10731 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 08:30:42.217160   10731 retry.go:31] will retry after 18.203457347s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 08:30:42.483748   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:30:42.530591   10731 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:30:42.530958   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:30:42.700548   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:30:42.983348   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1101 08:30:42.983404   10731 node_ready.go:57] node "addons-491859" has "Ready":"False" status (will retry)
	I1101 08:30:43.030076   10731 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:30:43.030795   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:30:43.200333   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:30:43.483500   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:30:43.530119   10731 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:30:43.530675   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:30:43.700193   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:30:43.983842   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:30:44.030479   10731 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:30:44.030950   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:30:44.200632   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:30:44.482486   10731 node_ready.go:49] node "addons-491859" is "Ready"
	I1101 08:30:44.482525   10731 node_ready.go:38] duration metric: took 40.502673113s for node "addons-491859" to be "Ready" ...
	I1101 08:30:44.482554   10731 api_server.go:52] waiting for apiserver process to appear ...
	I1101 08:30:44.482615   10731 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 08:30:44.483452   10731 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1101 08:30:44.483474   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:30:44.496904   10731 api_server.go:72] duration metric: took 41.104839041s to wait for apiserver process to appear ...
	I1101 08:30:44.496932   10731 api_server.go:88] waiting for apiserver healthz status ...
	I1101 08:30:44.496952   10731 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1101 08:30:44.501946   10731 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1101 08:30:44.502965   10731 api_server.go:141] control plane version: v1.34.1
	I1101 08:30:44.502991   10731 api_server.go:131] duration metric: took 6.052489ms to wait for apiserver health ...
	I1101 08:30:44.503000   10731 system_pods.go:43] waiting for kube-system pods to appear ...
	I1101 08:30:44.508541   10731 system_pods.go:59] 20 kube-system pods found
	I1101 08:30:44.508586   10731 system_pods.go:61] "amd-gpu-device-plugin-6twrx" [6d120f25-a6a5-48f2-8849-25607b2e8338] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1101 08:30:44.508598   10731 system_pods.go:61] "coredns-66bc5c9577-wp7lb" [eae56377-036f-4eef-89a7-5d685f77fdeb] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 08:30:44.508608   10731 system_pods.go:61] "csi-hostpath-attacher-0" [b7fd1d03-fc22-4436-8594-4949ae507ffc] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1101 08:30:44.508631   10731 system_pods.go:61] "csi-hostpath-resizer-0" [944b7053-40ae-4094-b90e-5a1828ef9297] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1101 08:30:44.508640   10731 system_pods.go:61] "csi-hostpathplugin-b7wqd" [00647a0a-0c62-4ce2-a788-8db986f1d092] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1101 08:30:44.508646   10731 system_pods.go:61] "etcd-addons-491859" [debd6041-229d-4fe9-b7d3-5d939545f1ee] Running
	I1101 08:30:44.508651   10731 system_pods.go:61] "kindnet-7cj4p" [800b9b84-244b-4262-8df7-589eed5b9599] Running
	I1101 08:30:44.508658   10731 system_pods.go:61] "kube-apiserver-addons-491859" [d9b4572e-30e3-4ec1-ac79-bef8aaf6a60a] Running
	I1101 08:30:44.508663   10731 system_pods.go:61] "kube-controller-manager-addons-491859" [40de3202-f335-43e6-9af3-e7c4a5b50b43] Running
	I1101 08:30:44.508674   10731 system_pods.go:61] "kube-ingress-dns-minikube" [0e191110-51bb-4a21-a2cb-363be938390f] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1101 08:30:44.508684   10731 system_pods.go:61] "kube-proxy-h22tg" [f2f6b41b-c798-4afd-a685-24ba393d78a7] Running
	I1101 08:30:44.508690   10731 system_pods.go:61] "kube-scheduler-addons-491859" [4e097ceb-3a17-432d-8359-7ad7db3c99da] Running
	I1101 08:30:44.508699   10731 system_pods.go:61] "metrics-server-85b7d694d7-8j2pv" [8863ea1b-774d-469e-8487-d29ec16b131c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1101 08:30:44.508709   10731 system_pods.go:61] "nvidia-device-plugin-daemonset-hbv5p" [838833dc-5806-4421-822f-e50f71ba642b] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1101 08:30:44.508722   10731 system_pods.go:61] "registry-6b586f9694-nlmgw" [81e0129d-d199-423b-a493-623cb2695a4f] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1101 08:30:44.508734   10731 system_pods.go:61] "registry-creds-764b6fb674-rj5zk" [5f281acc-558f-462c-bf98-c52c7b8b34a1] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1101 08:30:44.508747   10731 system_pods.go:61] "registry-proxy-jncr6" [6640fd69-5d62-4d2e-acb5-66ff58f82684] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1101 08:30:44.508761   10731 system_pods.go:61] "snapshot-controller-7d9fbc56b8-7bhnn" [2243f94e-8cc4-4e41-9e6b-6e83768aa796] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1101 08:30:44.508775   10731 system_pods.go:61] "snapshot-controller-7d9fbc56b8-c9dzh" [502d593c-55d4-440e-b4f3-2a5f5c53bca7] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1101 08:30:44.508787   10731 system_pods.go:61] "storage-provisioner" [4b227df0-0df7-4c55-81bb-20a8928f38ea] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1101 08:30:44.508799   10731 system_pods.go:74] duration metric: took 5.79208ms to wait for pod list to return data ...
	I1101 08:30:44.508813   10731 default_sa.go:34] waiting for default service account to be created ...
	I1101 08:30:44.517793   10731 default_sa.go:45] found service account: "default"
	I1101 08:30:44.517825   10731 default_sa.go:55] duration metric: took 9.003622ms for default service account to be created ...
	I1101 08:30:44.517839   10731 system_pods.go:116] waiting for k8s-apps to be running ...
	I1101 08:30:44.532083   10731 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:30:44.532987   10731 system_pods.go:86] 20 kube-system pods found
	I1101 08:30:44.533015   10731 system_pods.go:89] "amd-gpu-device-plugin-6twrx" [6d120f25-a6a5-48f2-8849-25607b2e8338] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1101 08:30:44.533026   10731 system_pods.go:89] "coredns-66bc5c9577-wp7lb" [eae56377-036f-4eef-89a7-5d685f77fdeb] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 08:30:44.533036   10731 system_pods.go:89] "csi-hostpath-attacher-0" [b7fd1d03-fc22-4436-8594-4949ae507ffc] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1101 08:30:44.533045   10731 system_pods.go:89] "csi-hostpath-resizer-0" [944b7053-40ae-4094-b90e-5a1828ef9297] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1101 08:30:44.533054   10731 system_pods.go:89] "csi-hostpathplugin-b7wqd" [00647a0a-0c62-4ce2-a788-8db986f1d092] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1101 08:30:44.533060   10731 system_pods.go:89] "etcd-addons-491859" [debd6041-229d-4fe9-b7d3-5d939545f1ee] Running
	I1101 08:30:44.533069   10731 system_pods.go:89] "kindnet-7cj4p" [800b9b84-244b-4262-8df7-589eed5b9599] Running
	I1101 08:30:44.533115   10731 system_pods.go:89] "kube-apiserver-addons-491859" [d9b4572e-30e3-4ec1-ac79-bef8aaf6a60a] Running
	I1101 08:30:44.533127   10731 system_pods.go:89] "kube-controller-manager-addons-491859" [40de3202-f335-43e6-9af3-e7c4a5b50b43] Running
	I1101 08:30:44.533137   10731 system_pods.go:89] "kube-ingress-dns-minikube" [0e191110-51bb-4a21-a2cb-363be938390f] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1101 08:30:44.533143   10731 system_pods.go:89] "kube-proxy-h22tg" [f2f6b41b-c798-4afd-a685-24ba393d78a7] Running
	I1101 08:30:44.533148   10731 system_pods.go:89] "kube-scheduler-addons-491859" [4e097ceb-3a17-432d-8359-7ad7db3c99da] Running
	I1101 08:30:44.533156   10731 system_pods.go:89] "metrics-server-85b7d694d7-8j2pv" [8863ea1b-774d-469e-8487-d29ec16b131c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1101 08:30:44.533165   10731 system_pods.go:89] "nvidia-device-plugin-daemonset-hbv5p" [838833dc-5806-4421-822f-e50f71ba642b] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1101 08:30:44.533177   10731 system_pods.go:89] "registry-6b586f9694-nlmgw" [81e0129d-d199-423b-a493-623cb2695a4f] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1101 08:30:44.533186   10731 system_pods.go:89] "registry-creds-764b6fb674-rj5zk" [5f281acc-558f-462c-bf98-c52c7b8b34a1] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1101 08:30:44.533111   10731 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1101 08:30:44.533212   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:30:44.533197   10731 system_pods.go:89] "registry-proxy-jncr6" [6640fd69-5d62-4d2e-acb5-66ff58f82684] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1101 08:30:44.533249   10731 system_pods.go:89] "snapshot-controller-7d9fbc56b8-7bhnn" [2243f94e-8cc4-4e41-9e6b-6e83768aa796] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1101 08:30:44.533262   10731 system_pods.go:89] "snapshot-controller-7d9fbc56b8-c9dzh" [502d593c-55d4-440e-b4f3-2a5f5c53bca7] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1101 08:30:44.533282   10731 system_pods.go:89] "storage-provisioner" [4b227df0-0df7-4c55-81bb-20a8928f38ea] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1101 08:30:44.533300   10731 retry.go:31] will retry after 223.645933ms: missing components: kube-dns
	I1101 08:30:44.701221   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:30:44.803418   10731 system_pods.go:86] 20 kube-system pods found
	I1101 08:30:44.803454   10731 system_pods.go:89] "amd-gpu-device-plugin-6twrx" [6d120f25-a6a5-48f2-8849-25607b2e8338] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1101 08:30:44.803462   10731 system_pods.go:89] "coredns-66bc5c9577-wp7lb" [eae56377-036f-4eef-89a7-5d685f77fdeb] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 08:30:44.803468   10731 system_pods.go:89] "csi-hostpath-attacher-0" [b7fd1d03-fc22-4436-8594-4949ae507ffc] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1101 08:30:44.803473   10731 system_pods.go:89] "csi-hostpath-resizer-0" [944b7053-40ae-4094-b90e-5a1828ef9297] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1101 08:30:44.803482   10731 system_pods.go:89] "csi-hostpathplugin-b7wqd" [00647a0a-0c62-4ce2-a788-8db986f1d092] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1101 08:30:44.803485   10731 system_pods.go:89] "etcd-addons-491859" [debd6041-229d-4fe9-b7d3-5d939545f1ee] Running
	I1101 08:30:44.803490   10731 system_pods.go:89] "kindnet-7cj4p" [800b9b84-244b-4262-8df7-589eed5b9599] Running
	I1101 08:30:44.803494   10731 system_pods.go:89] "kube-apiserver-addons-491859" [d9b4572e-30e3-4ec1-ac79-bef8aaf6a60a] Running
	I1101 08:30:44.803497   10731 system_pods.go:89] "kube-controller-manager-addons-491859" [40de3202-f335-43e6-9af3-e7c4a5b50b43] Running
	I1101 08:30:44.803503   10731 system_pods.go:89] "kube-ingress-dns-minikube" [0e191110-51bb-4a21-a2cb-363be938390f] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1101 08:30:44.803516   10731 system_pods.go:89] "kube-proxy-h22tg" [f2f6b41b-c798-4afd-a685-24ba393d78a7] Running
	I1101 08:30:44.803520   10731 system_pods.go:89] "kube-scheduler-addons-491859" [4e097ceb-3a17-432d-8359-7ad7db3c99da] Running
	I1101 08:30:44.803524   10731 system_pods.go:89] "metrics-server-85b7d694d7-8j2pv" [8863ea1b-774d-469e-8487-d29ec16b131c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1101 08:30:44.803537   10731 system_pods.go:89] "nvidia-device-plugin-daemonset-hbv5p" [838833dc-5806-4421-822f-e50f71ba642b] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1101 08:30:44.803544   10731 system_pods.go:89] "registry-6b586f9694-nlmgw" [81e0129d-d199-423b-a493-623cb2695a4f] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1101 08:30:44.803551   10731 system_pods.go:89] "registry-creds-764b6fb674-rj5zk" [5f281acc-558f-462c-bf98-c52c7b8b34a1] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1101 08:30:44.803557   10731 system_pods.go:89] "registry-proxy-jncr6" [6640fd69-5d62-4d2e-acb5-66ff58f82684] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1101 08:30:44.803562   10731 system_pods.go:89] "snapshot-controller-7d9fbc56b8-7bhnn" [2243f94e-8cc4-4e41-9e6b-6e83768aa796] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1101 08:30:44.803570   10731 system_pods.go:89] "snapshot-controller-7d9fbc56b8-c9dzh" [502d593c-55d4-440e-b4f3-2a5f5c53bca7] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1101 08:30:44.803574   10731 system_pods.go:89] "storage-provisioner" [4b227df0-0df7-4c55-81bb-20a8928f38ea] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1101 08:30:44.803587   10731 retry.go:31] will retry after 322.669522ms: missing components: kube-dns
	I1101 08:30:44.986702   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:30:45.086307   10731 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:30:45.086430   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:30:45.131081   10731 system_pods.go:86] 20 kube-system pods found
	I1101 08:30:45.131122   10731 system_pods.go:89] "amd-gpu-device-plugin-6twrx" [6d120f25-a6a5-48f2-8849-25607b2e8338] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1101 08:30:45.131134   10731 system_pods.go:89] "coredns-66bc5c9577-wp7lb" [eae56377-036f-4eef-89a7-5d685f77fdeb] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 08:30:45.131144   10731 system_pods.go:89] "csi-hostpath-attacher-0" [b7fd1d03-fc22-4436-8594-4949ae507ffc] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1101 08:30:45.131157   10731 system_pods.go:89] "csi-hostpath-resizer-0" [944b7053-40ae-4094-b90e-5a1828ef9297] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1101 08:30:45.131168   10731 system_pods.go:89] "csi-hostpathplugin-b7wqd" [00647a0a-0c62-4ce2-a788-8db986f1d092] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1101 08:30:45.131178   10731 system_pods.go:89] "etcd-addons-491859" [debd6041-229d-4fe9-b7d3-5d939545f1ee] Running
	I1101 08:30:45.131189   10731 system_pods.go:89] "kindnet-7cj4p" [800b9b84-244b-4262-8df7-589eed5b9599] Running
	I1101 08:30:45.131198   10731 system_pods.go:89] "kube-apiserver-addons-491859" [d9b4572e-30e3-4ec1-ac79-bef8aaf6a60a] Running
	I1101 08:30:45.131204   10731 system_pods.go:89] "kube-controller-manager-addons-491859" [40de3202-f335-43e6-9af3-e7c4a5b50b43] Running
	I1101 08:30:45.131217   10731 system_pods.go:89] "kube-ingress-dns-minikube" [0e191110-51bb-4a21-a2cb-363be938390f] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1101 08:30:45.131225   10731 system_pods.go:89] "kube-proxy-h22tg" [f2f6b41b-c798-4afd-a685-24ba393d78a7] Running
	I1101 08:30:45.131233   10731 system_pods.go:89] "kube-scheduler-addons-491859" [4e097ceb-3a17-432d-8359-7ad7db3c99da] Running
	I1101 08:30:45.131244   10731 system_pods.go:89] "metrics-server-85b7d694d7-8j2pv" [8863ea1b-774d-469e-8487-d29ec16b131c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1101 08:30:45.131252   10731 system_pods.go:89] "nvidia-device-plugin-daemonset-hbv5p" [838833dc-5806-4421-822f-e50f71ba642b] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1101 08:30:45.131263   10731 system_pods.go:89] "registry-6b586f9694-nlmgw" [81e0129d-d199-423b-a493-623cb2695a4f] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1101 08:30:45.131272   10731 system_pods.go:89] "registry-creds-764b6fb674-rj5zk" [5f281acc-558f-462c-bf98-c52c7b8b34a1] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1101 08:30:45.131299   10731 system_pods.go:89] "registry-proxy-jncr6" [6640fd69-5d62-4d2e-acb5-66ff58f82684] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1101 08:30:45.131310   10731 system_pods.go:89] "snapshot-controller-7d9fbc56b8-7bhnn" [2243f94e-8cc4-4e41-9e6b-6e83768aa796] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1101 08:30:45.131322   10731 system_pods.go:89] "snapshot-controller-7d9fbc56b8-c9dzh" [502d593c-55d4-440e-b4f3-2a5f5c53bca7] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1101 08:30:45.131334   10731 system_pods.go:89] "storage-provisioner" [4b227df0-0df7-4c55-81bb-20a8928f38ea] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1101 08:30:45.131354   10731 retry.go:31] will retry after 465.248265ms: missing components: kube-dns
	I1101 08:30:45.200911   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:30:45.485498   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:30:45.530399   10731 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:30:45.531769   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:30:45.601286   10731 system_pods.go:86] 20 kube-system pods found
	I1101 08:30:45.601320   10731 system_pods.go:89] "amd-gpu-device-plugin-6twrx" [6d120f25-a6a5-48f2-8849-25607b2e8338] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1101 08:30:45.601327   10731 system_pods.go:89] "coredns-66bc5c9577-wp7lb" [eae56377-036f-4eef-89a7-5d685f77fdeb] Running
	I1101 08:30:45.601339   10731 system_pods.go:89] "csi-hostpath-attacher-0" [b7fd1d03-fc22-4436-8594-4949ae507ffc] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1101 08:30:45.601346   10731 system_pods.go:89] "csi-hostpath-resizer-0" [944b7053-40ae-4094-b90e-5a1828ef9297] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1101 08:30:45.601355   10731 system_pods.go:89] "csi-hostpathplugin-b7wqd" [00647a0a-0c62-4ce2-a788-8db986f1d092] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1101 08:30:45.601363   10731 system_pods.go:89] "etcd-addons-491859" [debd6041-229d-4fe9-b7d3-5d939545f1ee] Running
	I1101 08:30:45.601371   10731 system_pods.go:89] "kindnet-7cj4p" [800b9b84-244b-4262-8df7-589eed5b9599] Running
	I1101 08:30:45.601379   10731 system_pods.go:89] "kube-apiserver-addons-491859" [d9b4572e-30e3-4ec1-ac79-bef8aaf6a60a] Running
	I1101 08:30:45.601385   10731 system_pods.go:89] "kube-controller-manager-addons-491859" [40de3202-f335-43e6-9af3-e7c4a5b50b43] Running
	I1101 08:30:45.601398   10731 system_pods.go:89] "kube-ingress-dns-minikube" [0e191110-51bb-4a21-a2cb-363be938390f] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1101 08:30:45.601403   10731 system_pods.go:89] "kube-proxy-h22tg" [f2f6b41b-c798-4afd-a685-24ba393d78a7] Running
	I1101 08:30:45.601410   10731 system_pods.go:89] "kube-scheduler-addons-491859" [4e097ceb-3a17-432d-8359-7ad7db3c99da] Running
	I1101 08:30:45.601418   10731 system_pods.go:89] "metrics-server-85b7d694d7-8j2pv" [8863ea1b-774d-469e-8487-d29ec16b131c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1101 08:30:45.601427   10731 system_pods.go:89] "nvidia-device-plugin-daemonset-hbv5p" [838833dc-5806-4421-822f-e50f71ba642b] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1101 08:30:45.601438   10731 system_pods.go:89] "registry-6b586f9694-nlmgw" [81e0129d-d199-423b-a493-623cb2695a4f] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1101 08:30:45.601455   10731 system_pods.go:89] "registry-creds-764b6fb674-rj5zk" [5f281acc-558f-462c-bf98-c52c7b8b34a1] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1101 08:30:45.601463   10731 system_pods.go:89] "registry-proxy-jncr6" [6640fd69-5d62-4d2e-acb5-66ff58f82684] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1101 08:30:45.601471   10731 system_pods.go:89] "snapshot-controller-7d9fbc56b8-7bhnn" [2243f94e-8cc4-4e41-9e6b-6e83768aa796] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1101 08:30:45.601483   10731 system_pods.go:89] "snapshot-controller-7d9fbc56b8-c9dzh" [502d593c-55d4-440e-b4f3-2a5f5c53bca7] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1101 08:30:45.601490   10731 system_pods.go:89] "storage-provisioner" [4b227df0-0df7-4c55-81bb-20a8928f38ea] Running
	I1101 08:30:45.601502   10731 system_pods.go:126] duration metric: took 1.08365464s to wait for k8s-apps to be running ...
	I1101 08:30:45.601516   10731 system_svc.go:44] waiting for kubelet service to be running ....
	I1101 08:30:45.601567   10731 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 08:30:45.619182   10731 system_svc.go:56] duration metric: took 17.656949ms WaitForService to wait for kubelet
	I1101 08:30:45.619219   10731 kubeadm.go:587] duration metric: took 42.227159063s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1101 08:30:45.619242   10731 node_conditions.go:102] verifying NodePressure condition ...
	I1101 08:30:45.622145   10731 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1101 08:30:45.622177   10731 node_conditions.go:123] node cpu capacity is 8
	I1101 08:30:45.622195   10731 node_conditions.go:105] duration metric: took 2.946946ms to run NodePressure ...
	I1101 08:30:45.622209   10731 start.go:242] waiting for startup goroutines ...
	I1101 08:30:45.701355   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:30:45.984897   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:30:46.085957   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:30:46.086015   10731 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:30:46.200525   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:30:46.487062   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:30:46.531249   10731 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:30:46.532021   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:30:46.700858   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:30:46.985526   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:30:47.033084   10731 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:30:47.033168   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:30:47.201822   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:30:47.484982   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:30:47.531250   10731 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:30:47.531712   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:30:47.701791   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:30:47.984654   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:30:48.030405   10731 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:30:48.030690   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:30:48.201216   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:30:48.484723   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:30:48.530668   10731 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:30:48.530960   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:30:48.701046   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:30:48.985379   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:30:49.031363   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:30:49.031510   10731 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:30:49.201495   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:30:49.484817   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:30:49.530909   10731 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:30:49.531747   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:30:49.700843   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:30:49.985574   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:30:50.030685   10731 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:30:50.032087   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:30:50.200993   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:30:50.484288   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:30:50.585490   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:30:50.585510   10731 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:30:50.701103   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:30:50.984591   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:30:51.030583   10731 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:30:51.030848   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:30:51.200782   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:30:51.484898   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:30:51.530906   10731 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:30:51.531544   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:30:51.701904   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:30:51.984598   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:30:52.030330   10731 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:30:52.030956   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:30:52.200859   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:30:52.487565   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:30:52.530634   10731 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:30:52.531795   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:30:52.702295   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:30:52.984680   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:30:53.030933   10731 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:30:53.031306   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:30:53.201746   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:30:53.484232   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:30:53.531112   10731 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:30:53.531487   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:30:53.701411   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:30:53.984694   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:30:54.030389   10731 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:30:54.030927   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:30:54.200550   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:30:54.484797   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:30:54.530498   10731 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:30:54.531066   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:30:54.701467   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:30:54.984652   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:30:55.030336   10731 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:30:55.030945   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:30:55.200786   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:30:55.485647   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:30:55.530498   10731 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:30:55.530996   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:30:55.703499   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:30:55.985040   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:30:56.031373   10731 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:30:56.031671   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:30:56.200587   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:30:56.484621   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:30:56.530467   10731 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:30:56.531015   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:30:56.700681   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:30:56.985471   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:30:57.030667   10731 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:30:57.031595   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:30:57.201591   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:30:57.484926   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:30:57.531537   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:30:57.531537   10731 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:30:57.701365   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:30:57.985014   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:30:58.030821   10731 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:30:58.031053   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:30:58.201010   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:30:58.649571   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:30:58.649614   10731 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:30:58.649643   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:30:58.701225   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:30:58.984571   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:30:59.030360   10731 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:30:59.030618   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:30:59.200652   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:30:59.485380   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:30:59.531430   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:30:59.531639   10731 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:30:59.701341   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:30:59.984955   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:31:00.030686   10731 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:31:00.085633   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:31:00.201480   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:31:00.421798   10731 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1101 08:31:00.485235   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:31:00.531215   10731 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:31:00.531305   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:31:00.701174   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:31:00.984684   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:31:01.031200   10731 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:31:01.031841   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1101 08:31:01.135289   10731 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 08:31:01.135328   10731 retry.go:31] will retry after 17.624454735s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
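
The error above is kubectl validation: every document in an applied file must declare both apiVersion and kind, and ig-crd.yaml evidently contains one that does not, so the apply is rejected and retried below. A minimal sketch of the same check, assuming gopkg.in/yaml.v3 is available and inspecting only the first YAML document in the file (kubectl's real validator handles multi-document files and full schemas):

package main

import (
	"fmt"
	"os"

	"gopkg.in/yaml.v3" // assumed dependency; kubectl uses its own validation stack
)

// typeMeta mirrors the two fields the error message complains about.
type typeMeta struct {
	APIVersion string `yaml:"apiVersion"`
	Kind       string `yaml:"kind"`
}

func main() {
	data, err := os.ReadFile("/etc/kubernetes/addons/ig-crd.yaml")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	var tm typeMeta
	if err := yaml.Unmarshal(data, &tm); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	var problems []string
	if tm.APIVersion == "" {
		problems = append(problems, "apiVersion not set")
	}
	if tm.Kind == "" {
		problems = append(problems, "kind not set")
	}
	if len(problems) > 0 {
		fmt.Printf("error validating data: %v\n", problems)
		os.Exit(1)
	}
	fmt.Println("first document declares apiVersion and kind")
}

As the message itself notes, --validate=false would suppress the error, but the underlying fix is for the addon manifest to carry the missing fields.
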
	I1101 08:31:01.201386   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:31:01.484618   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:31:01.585394   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:31:01.585472   10731 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:31:01.701089   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:31:01.984558   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:31:02.030468   10731 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:31:02.030774   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:31:02.200774   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:31:02.484158   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:31:02.585371   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:31:02.585505   10731 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:31:02.701416   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:31:02.985014   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:31:03.031157   10731 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:31:03.031529   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:31:03.201061   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:31:03.486284   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:31:03.530807   10731 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:31:03.531555   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:31:03.701660   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:31:03.985125   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:31:04.030905   10731 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:31:04.031451   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:31:04.201050   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:31:04.486933   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:31:04.693911   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:31:04.694261   10731 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:31:04.863607   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:31:04.985566   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:31:05.030232   10731 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:31:05.031734   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:31:05.200393   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:31:05.488054   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:31:05.532770   10731 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:31:05.533594   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:31:05.700315   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:31:05.984749   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:31:06.030486   10731 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:31:06.030965   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:31:06.200653   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:31:06.485309   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:31:06.531155   10731 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:31:06.531435   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:31:06.701566   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:31:06.986309   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:31:07.087965   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:31:07.088108   10731 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:31:07.200623   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:31:07.485150   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:31:07.531705   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:31:07.531739   10731 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:31:07.700639   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:31:07.984849   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:31:08.030593   10731 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:31:08.031248   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:31:08.200555   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:31:08.486121   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:31:08.530921   10731 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:31:08.531459   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:31:08.701220   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:31:08.985054   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:31:09.030930   10731 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:31:09.031696   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:31:09.200419   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:31:09.484693   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:31:09.585149   10731 kapi.go:107] duration metric: took 1m4.556655418s to wait for kubernetes.io/minikube-addons=registry ...
	I1101 08:31:09.585348   10731 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:31:09.700900   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:31:09.987325   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:31:10.055482   10731 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:31:10.201183   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:31:10.485089   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:31:10.585964   10731 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:31:10.700899   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:31:10.985436   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:31:11.031477   10731 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:31:11.201310   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:31:11.484560   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:31:11.530474   10731 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:31:11.700579   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:31:11.985031   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:31:12.030798   10731 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:31:12.201147   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:31:12.484701   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:31:12.530973   10731 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:31:12.700765   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:31:12.985498   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:31:13.030262   10731 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:31:13.200936   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:31:13.484768   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:31:13.530311   10731 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:31:13.701165   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:31:13.984632   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:31:14.030196   10731 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:31:14.200684   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:31:14.485495   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:31:14.530750   10731 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:31:14.703955   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:31:14.984242   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:31:15.085196   10731 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:31:15.200914   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:31:15.484252   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:31:15.531066   10731 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:31:15.700906   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:31:15.984293   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:31:16.029990   10731 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:31:16.200480   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:31:16.485022   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:31:16.530583   10731 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:31:16.701357   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:31:16.988058   10731 kapi.go:107] duration metric: took 1m11.507059289s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1101 08:31:17.030885   10731 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:31:17.200572   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:31:17.530476   10731 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:31:17.700931   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:31:18.030926   10731 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:31:18.200532   10731 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:31:18.533141   10731 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:31:18.704339   10731 kapi.go:107] duration metric: took 1m7.006683576s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1101 08:31:18.706036   10731 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-491859 cluster.
	I1101 08:31:18.707341   10731 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1101 08:31:18.708535   10731 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
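
The gcp-auth webhook mounts credentials into every new pod by default; per the message above, a pod opts out by carrying the gcp-auth-skip-secret label. A small sketch of such a pod spec built with the Kubernetes Go types; the pod name, container name, and the label value "true" are illustrative assumptions (only the label key comes from the log):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"sigs.k8s.io/yaml"
)

func main() {
	// Pod that asks the gcp-auth webhook not to mount credentials into it.
	pod := corev1.Pod{
		TypeMeta: metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
		ObjectMeta: metav1.ObjectMeta{
			Name:   "no-gcp-creds",                                    // assumed name
			Labels: map[string]string{"gcp-auth-skip-secret": "true"}, // value assumed
		},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{
				{Name: "app", Image: "gcr.io/k8s-minikube/busybox:1.28.4-glibc", Command: []string{"sleep", "3600"}},
			},
		},
	}
	out, err := yaml.Marshal(&pod)
	if err != nil {
		panic(err)
	}
	fmt.Print(string(out)) // renders the manifest, label included
}
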
	I1101 08:31:18.760417   10731 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1101 08:31:19.032099   10731 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:31:19.531387   10731 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1101 08:31:19.575975   10731 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 08:31:19.576009   10731 retry.go:31] will retry after 21.105929344s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
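
Between attempts, retry.go sleeps a growing delay (17.6s for the first retry above, 21.1s for this one), so transient apply errors get a few more chances before the addon enable is reported as failed. A stdlib-only sketch of that retry shape; the attempt count, base delay, and jitter are arbitrary and not minikube's actual schedule:

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retry runs fn up to attempts times, sleeping a jittered, growing delay
// between failures. It only shows the shape of the mechanism logged above.
func retry(attempts int, base time.Duration, fn func() error) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		delay := base*time.Duration(i+1) + time.Duration(rand.Int63n(int64(base)))
		fmt.Printf("attempt %d failed: %v; will retry after %s\n", i+1, err, delay)
		time.Sleep(delay)
	}
	return fmt.Errorf("giving up after %d attempts: %w", attempts, err)
}

func main() {
	err := retry(3, 2*time.Second, func() error {
		return errors.New("kubectl apply failed") // stand-in for the real apply
	})
	fmt.Println(err)
}
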
	I1101 08:31:20.030963   10731 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:31:20.530362   10731 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:31:21.030627   10731 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:31:21.530891   10731 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:31:22.031208   10731 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:31:22.530300   10731 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:31:23.030541   10731 kapi.go:107] duration metric: took 1m18.003419101s to wait for app.kubernetes.io/name=ingress-nginx ...
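
Each kapi.go:96 line is one iteration of a roughly half-second poll: list the pods matching the label selector, log their phase while it is still Pending, and stop once they are Running, at which point kapi.go:107 records the total wait. A stripped-down sketch of that loop with client-go; the kubeconfig path and selector are taken from this log, while the interval, timeout, and namespace are assumptions:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForPods polls until every pod matching selector is Running.
func waitForPods(ctx context.Context, cs *kubernetes.Clientset, ns, selector string) error {
	start := time.Now()
	ticker := time.NewTicker(500 * time.Millisecond)
	defer ticker.Stop()
	for {
		pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
		if err == nil && len(pods.Items) > 0 {
			ready := true
			for _, p := range pods.Items {
				if p.Status.Phase != corev1.PodRunning {
					fmt.Printf("waiting for pod %q, current state: %s\n", selector, p.Status.Phase)
					ready = false
				}
			}
			if ready {
				fmt.Printf("took %s to wait for %s\n", time.Since(start), selector)
				return nil
			}
		}
		select {
		case <-ctx.Done():
			return ctx.Err()
		case <-ticker.C:
		}
	}
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
	defer cancel()
	if err := waitForPods(ctx, cs, "kube-system", "kubernetes.io/minikube-addons=csi-hostpath-driver"); err != nil {
		panic(err)
	}
}
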
	I1101 08:31:40.683989   10731 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	W1101 08:31:41.220141   10731 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	W1101 08:31:41.220264   10731 out.go:285] ! Enabling 'inspektor-gadget' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1101 08:31:41.222065   10731 out.go:179] * Enabled addons: registry-creds, nvidia-device-plugin, cloud-spanner, ingress-dns, storage-provisioner, storage-provisioner-rancher, metrics-server, yakd, default-storageclass, amd-gpu-device-plugin, volumesnapshots, registry, csi-hostpath-driver, gcp-auth, ingress
	I1101 08:31:41.223581   10731 addons.go:515] duration metric: took 1m37.831522321s for enable addons: enabled=[registry-creds nvidia-device-plugin cloud-spanner ingress-dns storage-provisioner storage-provisioner-rancher metrics-server yakd default-storageclass amd-gpu-device-plugin volumesnapshots registry csi-hostpath-driver gcp-auth ingress]
	I1101 08:31:41.223641   10731 start.go:247] waiting for cluster config update ...
	I1101 08:31:41.223670   10731 start.go:256] writing updated cluster config ...
	I1101 08:31:41.224041   10731 ssh_runner.go:195] Run: rm -f paused
	I1101 08:31:41.228161   10731 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1101 08:31:41.231692   10731 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-wp7lb" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 08:31:41.236100   10731 pod_ready.go:94] pod "coredns-66bc5c9577-wp7lb" is "Ready"
	I1101 08:31:41.236129   10731 pod_ready.go:86] duration metric: took 4.407953ms for pod "coredns-66bc5c9577-wp7lb" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 08:31:41.238151   10731 pod_ready.go:83] waiting for pod "etcd-addons-491859" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 08:31:41.242371   10731 pod_ready.go:94] pod "etcd-addons-491859" is "Ready"
	I1101 08:31:41.242395   10731 pod_ready.go:86] duration metric: took 4.222388ms for pod "etcd-addons-491859" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 08:31:41.244308   10731 pod_ready.go:83] waiting for pod "kube-apiserver-addons-491859" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 08:31:41.248202   10731 pod_ready.go:94] pod "kube-apiserver-addons-491859" is "Ready"
	I1101 08:31:41.248227   10731 pod_ready.go:86] duration metric: took 3.893348ms for pod "kube-apiserver-addons-491859" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 08:31:41.250307   10731 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-491859" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 08:31:41.632126   10731 pod_ready.go:94] pod "kube-controller-manager-addons-491859" is "Ready"
	I1101 08:31:41.632156   10731 pod_ready.go:86] duration metric: took 381.825433ms for pod "kube-controller-manager-addons-491859" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 08:31:41.832213   10731 pod_ready.go:83] waiting for pod "kube-proxy-h22tg" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 08:31:42.232241   10731 pod_ready.go:94] pod "kube-proxy-h22tg" is "Ready"
	I1101 08:31:42.232267   10731 pod_ready.go:86] duration metric: took 400.022337ms for pod "kube-proxy-h22tg" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 08:31:42.432216   10731 pod_ready.go:83] waiting for pod "kube-scheduler-addons-491859" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 08:31:42.831835   10731 pod_ready.go:94] pod "kube-scheduler-addons-491859" is "Ready"
	I1101 08:31:42.831877   10731 pod_ready.go:86] duration metric: took 399.624296ms for pod "kube-scheduler-addons-491859" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 08:31:42.831888   10731 pod_ready.go:40] duration metric: took 1.603688686s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
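
The pod_ready.go wait is stricter than a phase check: in Kubernetes a pod counts as "Ready" when its PodReady condition reports True, which is what the "pod ... is Ready" lines above reflect. A tiny self-contained sketch of that condition test (the sample pod is fabricated for illustration):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// isReady reports whether the pod's Ready condition is True.
func isReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	pod := &corev1.Pod{
		Status: corev1.PodStatus{
			Conditions: []corev1.PodCondition{
				{Type: corev1.PodReady, Status: corev1.ConditionTrue},
			},
		},
	}
	fmt.Println(isReady(pod)) // true
}
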
	I1101 08:31:42.875375   10731 start.go:628] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1101 08:31:42.878437   10731 out.go:179] * Done! kubectl is now configured to use "addons-491859" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Nov 01 08:31:43 addons-491859 crio[775]: time="2025-11-01T08:31:43.744058975Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=cbcdd758-42e2-42ba-97c7-f48c56c7d49c name=/runtime.v1.ImageService/PullImage
	Nov 01 08:31:43 addons-491859 crio[775]: time="2025-11-01T08:31:43.745506615Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Nov 01 08:31:44 addons-491859 crio[775]: time="2025-11-01T08:31:44.428526884Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998" id=cbcdd758-42e2-42ba-97c7-f48c56c7d49c name=/runtime.v1.ImageService/PullImage
	Nov 01 08:31:44 addons-491859 crio[775]: time="2025-11-01T08:31:44.429116157Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=7b329790-a131-4f06-9b65-1bc1deafca8a name=/runtime.v1.ImageService/ImageStatus
	Nov 01 08:31:44 addons-491859 crio[775]: time="2025-11-01T08:31:44.430478653Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=65f1b8d0-33e6-4544-b33f-771456b59251 name=/runtime.v1.ImageService/ImageStatus
	Nov 01 08:31:44 addons-491859 crio[775]: time="2025-11-01T08:31:44.434239867Z" level=info msg="Creating container: default/busybox/busybox" id=9f55a0db-d1f8-4864-b897-db9de3c63603 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 08:31:44 addons-491859 crio[775]: time="2025-11-01T08:31:44.434382183Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 08:31:44 addons-491859 crio[775]: time="2025-11-01T08:31:44.439584885Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 08:31:44 addons-491859 crio[775]: time="2025-11-01T08:31:44.440137394Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 08:31:44 addons-491859 crio[775]: time="2025-11-01T08:31:44.481981645Z" level=info msg="Created container 8ee7f2ba44bc37017ed5c265ee66beb940a0b37448dac7dcb59c94a23b56f615: default/busybox/busybox" id=9f55a0db-d1f8-4864-b897-db9de3c63603 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 08:31:44 addons-491859 crio[775]: time="2025-11-01T08:31:44.482591979Z" level=info msg="Starting container: 8ee7f2ba44bc37017ed5c265ee66beb940a0b37448dac7dcb59c94a23b56f615" id=dc039820-334e-4dff-96cf-a49d0f951b8f name=/runtime.v1.RuntimeService/StartContainer
	Nov 01 08:31:44 addons-491859 crio[775]: time="2025-11-01T08:31:44.484370888Z" level=info msg="Started container" PID=6582 containerID=8ee7f2ba44bc37017ed5c265ee66beb940a0b37448dac7dcb59c94a23b56f615 description=default/busybox/busybox id=dc039820-334e-4dff-96cf-a49d0f951b8f name=/runtime.v1.RuntimeService/StartContainer sandboxID=116db8fc3fe4f486fc1a4c8db723b4dc8bb1c257dd80174edec484fdf90604ce
	Nov 01 08:31:51 addons-491859 crio[775]: time="2025-11-01T08:31:51.539006065Z" level=info msg="Running pod sandbox: local-path-storage/helper-pod-create-pvc-0ee8f669-5c4e-41a4-b21d-0752409d6634/POD" id=7108602f-eff3-4573-94fe-2b507afcd324 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 01 08:31:51 addons-491859 crio[775]: time="2025-11-01T08:31:51.539120032Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 08:31:51 addons-491859 crio[775]: time="2025-11-01T08:31:51.545495295Z" level=info msg="Got pod network &{Name:helper-pod-create-pvc-0ee8f669-5c4e-41a4-b21d-0752409d6634 Namespace:local-path-storage ID:73cfe3ff9c79a6a766dcf0917fbe0060228fb5abe29b24541676876be550f880 UID:e8e855fe-77b5-4112-8dfe-31ba402b5928 NetNS:/var/run/netns/1bbb5229-0144-4243-9c6d-59e7bca354b4 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc00008b038}] Aliases:map[]}"
	Nov 01 08:31:51 addons-491859 crio[775]: time="2025-11-01T08:31:51.545544708Z" level=info msg="Adding pod local-path-storage_helper-pod-create-pvc-0ee8f669-5c4e-41a4-b21d-0752409d6634 to CNI network \"kindnet\" (type=ptp)"
	Nov 01 08:31:51 addons-491859 crio[775]: time="2025-11-01T08:31:51.55647821Z" level=info msg="Got pod network &{Name:helper-pod-create-pvc-0ee8f669-5c4e-41a4-b21d-0752409d6634 Namespace:local-path-storage ID:73cfe3ff9c79a6a766dcf0917fbe0060228fb5abe29b24541676876be550f880 UID:e8e855fe-77b5-4112-8dfe-31ba402b5928 NetNS:/var/run/netns/1bbb5229-0144-4243-9c6d-59e7bca354b4 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc00008b038}] Aliases:map[]}"
	Nov 01 08:31:51 addons-491859 crio[775]: time="2025-11-01T08:31:51.556600531Z" level=info msg="Checking pod local-path-storage_helper-pod-create-pvc-0ee8f669-5c4e-41a4-b21d-0752409d6634 for CNI network kindnet (type=ptp)"
	Nov 01 08:31:51 addons-491859 crio[775]: time="2025-11-01T08:31:51.557581537Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Nov 01 08:31:51 addons-491859 crio[775]: time="2025-11-01T08:31:51.558707442Z" level=info msg="Ran pod sandbox 73cfe3ff9c79a6a766dcf0917fbe0060228fb5abe29b24541676876be550f880 with infra container: local-path-storage/helper-pod-create-pvc-0ee8f669-5c4e-41a4-b21d-0752409d6634/POD" id=7108602f-eff3-4573-94fe-2b507afcd324 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 01 08:31:51 addons-491859 crio[775]: time="2025-11-01T08:31:51.559796982Z" level=info msg="Checking image status: docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79" id=694136d2-5673-4770-a1cc-41c9fb6b8f20 name=/runtime.v1.ImageService/ImageStatus
	Nov 01 08:31:51 addons-491859 crio[775]: time="2025-11-01T08:31:51.560035601Z" level=info msg="Image docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79 not found" id=694136d2-5673-4770-a1cc-41c9fb6b8f20 name=/runtime.v1.ImageService/ImageStatus
	Nov 01 08:31:51 addons-491859 crio[775]: time="2025-11-01T08:31:51.560077331Z" level=info msg="Neither image nor artfiact docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79 found" id=694136d2-5673-4770-a1cc-41c9fb6b8f20 name=/runtime.v1.ImageService/ImageStatus
	Nov 01 08:31:51 addons-491859 crio[775]: time="2025-11-01T08:31:51.560647189Z" level=info msg="Pulling image: docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79" id=74758c69-fdd9-4bf8-9019-b8803b28cc64 name=/runtime.v1.ImageService/PullImage
	Nov 01 08:31:51 addons-491859 crio[775]: time="2025-11-01T08:31:51.564679667Z" level=info msg="Trying to access \"docker.io/library/busybox@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79\""
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED              STATE               NAME                                     ATTEMPT             POD ID              POD                                         NAMESPACE
	8ee7f2ba44bc3       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998                                          8 seconds ago        Running             busybox                                  0                   116db8fc3fe4f       busybox                                     default
	9ffcf3bba5109       registry.k8s.io/ingress-nginx/controller@sha256:7b4073fc95e078d863c0b0b08deb72e01d2cf629e2156822bcd394fc2bcd8e83                             30 seconds ago       Running             controller                               0                   5e177cfc53e3a       ingress-nginx-controller-675c5ddd98-6nth2   ingress-nginx
	53b633b13927f       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:441f351b4520c228d29ba8c02a438d9ba971dafbbba5c91eaf882b1528797fb8                                 34 seconds ago       Running             gcp-auth                                 0                   ea4e664f598d0       gcp-auth-78565c9fb4-z7tgf                   gcp-auth
	33e4e1fc1e330       registry.k8s.io/sig-storage/csi-snapshotter@sha256:d844cb1faeb4ecf44bae6aea370c9c6128a87e665e40370021427d79a8819ee5                          36 seconds ago       Running             csi-snapshotter                          0                   d3a4a44899566       csi-hostpathplugin-b7wqd                    kube-system
	f04af0fd3a62d       registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7                          37 seconds ago       Running             csi-provisioner                          0                   d3a4a44899566       csi-hostpathplugin-b7wqd                    kube-system
	a1f3a49b7f394       registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6                            38 seconds ago       Running             liveness-probe                           0                   d3a4a44899566       csi-hostpathplugin-b7wqd                    kube-system
	ff4bdf52bbb88       registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11                           39 seconds ago       Running             hostpath                                 0                   d3a4a44899566       csi-hostpathplugin-b7wqd                    kube-system
	a2c23a5170ee9       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:db9cb3dd78ffab71eb8746afcb57bd3859993cb150a76d8b7cebe79441c702cb                            39 seconds ago       Running             gadget                                   0                   1ec45885f2df1       gadget-ggvnk                                gadget
	bf1bc8f93c691       08cfe302feafeabe4c2747ba112aa93917a7468cdd19a8835b48eb2ac88a7bf2                                                                             39 seconds ago       Exited              patch                                    2                   e2f94f12c19b0       gcp-auth-certs-patch-2265b                  gcp-auth
	9d1050c081be9       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc                42 seconds ago       Running             node-driver-registrar                    0                   d3a4a44899566       csi-hostpathplugin-b7wqd                    kube-system
	2c81dda5dfe97       gcr.io/k8s-minikube/kube-registry-proxy@sha256:8f72a79b63ca56074435e82b87fca2642a8117e60be313d3586dbe2bfff11cac                              43 seconds ago       Running             registry-proxy                           0                   4da6c27c4f365       registry-proxy-jncr6                        kube-system
	0f17b27c9fb94       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864   44 seconds ago       Running             csi-external-health-monitor-controller   0                   d3a4a44899566       csi-hostpathplugin-b7wqd                    kube-system
	3070142e88965       nvcr.io/nvidia/k8s-device-plugin@sha256:20db699f1480b6f37423cab909e9c6df5a4fdbd981b405e0d72f00a86fee5100                                     45 seconds ago       Running             nvidia-device-plugin-ctr                 0                   bfd96b8a614be       nvidia-device-plugin-daemonset-hbv5p        kube-system
	e0fe6aa919f9f       docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f                                     48 seconds ago       Running             amd-gpu-device-plugin                    0                   eac5d4e55668b       amd-gpu-device-plugin-6twrx                 kube-system
	dd32f839b496a       registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8                              49 seconds ago       Running             csi-resizer                              0                   fe18cc9a7812e       csi-hostpath-resizer-0                      kube-system
	71172491628a1       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:603a4996fc2ece451c708708e2881a855991cda47ddca5a4458b69a04f48d7f2                   50 seconds ago       Exited              create                                   0                   ecd22e4de7fd3       gcp-auth-certs-create-xr2jk                 gcp-auth
	97c62d07dbe74       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:603a4996fc2ece451c708708e2881a855991cda47ddca5a4458b69a04f48d7f2                   51 seconds ago       Exited              patch                                    0                   0f801687b0a22       ingress-nginx-admission-patch-lsz25         ingress-nginx
	36098b90e218e       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:603a4996fc2ece451c708708e2881a855991cda47ddca5a4458b69a04f48d7f2                   51 seconds ago       Exited              create                                   0                   bb97b090a41f6       ingress-nginx-admission-create-hh4rd        ingress-nginx
	b8dc66998b8c6       registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0                             51 seconds ago       Running             csi-attacher                             0                   500c0d0de486f       csi-hostpath-attacher-0                     kube-system
	c4c4e8392feed       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      52 seconds ago       Running             volume-snapshot-controller               0                   28ad2d10970c5       snapshot-controller-7d9fbc56b8-7bhnn        kube-system
	a4c41d6f050f2       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      52 seconds ago       Running             volume-snapshot-controller               0                   2f05ff87cd027       snapshot-controller-7d9fbc56b8-c9dzh        kube-system
	f5bdbd7214479       docker.io/marcnuri/yakd@sha256:8ebd1692ed5271719f13b728d9af7acb839aa04821e931c8993d908ad68b69fd                                              53 seconds ago       Running             yakd                                     0                   29a5ea326e6b9       yakd-dashboard-5ff678cb9-kmsmc              yakd-dashboard
	64b9c1289b678       docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef                             56 seconds ago       Running             local-path-provisioner                   0                   e62902b579140       local-path-provisioner-648f6765c9-5mm52     local-path-storage
	48f5680426820       gcr.io/cloud-spanner-emulator/emulator@sha256:66030f526b1bc41f0d2027b496fd8fa53f620bf9d5a18baa07990e67f1a20237                               57 seconds ago       Running             cloud-spanner-emulator                   0                   a077f7d3f0ee4       cloud-spanner-emulator-86bd5cbb97-d2cmm     default
	2b4413f8423a3       docker.io/library/registry@sha256:f57ffd2bb01704b6082396158e77ca6d1112bc6fe32315c322864de804750d8a                                           About a minute ago   Running             registry                                 0                   fb949762c0cdd       registry-6b586f9694-nlmgw                   kube-system
	73d495a359ef0       docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7                               About a minute ago   Running             minikube-ingress-dns                     0                   8b8a00c31af6f       kube-ingress-dns-minikube                   kube-system
	18fc9837ab4ea       registry.k8s.io/metrics-server/metrics-server@sha256:5dd31abb8093690d9624a53277a00d2257e7e57e6766be3f9f54cf9f54cddbc1                        About a minute ago   Running             metrics-server                           0                   2726e88d1b32a       metrics-server-85b7d694d7-8j2pv             kube-system
	87757a0f68b4c       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                                             About a minute ago   Running             storage-provisioner                      0                   83cc6f84d17be       storage-provisioner                         kube-system
	f17c2b6b25fbc       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                                                             About a minute ago   Running             coredns                                  0                   d8dfe32ac11ec       coredns-66bc5c9577-wp7lb                    kube-system
	c60507f296e95       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                                                             About a minute ago   Running             kindnet-cni                              0                   482fcc52b0cd5       kindnet-7cj4p                               kube-system
	4c1ad1a76dfd8       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                                                             About a minute ago   Running             kube-proxy                               0                   1caf02966b145       kube-proxy-h22tg                            kube-system
	808e84f4795d8       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                                                             About a minute ago   Running             kube-controller-manager                  0                   c3ed8efa0da25       kube-controller-manager-addons-491859       kube-system
	d4c72eaef4436       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                                                             About a minute ago   Running             kube-apiserver                           0                   cc3435e17324d       kube-apiserver-addons-491859                kube-system
	cdda903ada754       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                                                             About a minute ago   Running             kube-scheduler                           0                   44ebfd68de35e       kube-scheduler-addons-491859                kube-system
	b29235edc5383       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                                                             About a minute ago   Running             etcd                                     0                   1117cb301cd25       etcd-addons-491859                          kube-system
	
	
	==> coredns [f17c2b6b25fbc93674f08548b5309b9715e2cb6d7600ac5d6557505072b3fb3f] <==
	[INFO] 10.244.0.17:52902 - 52419 "A IN registry.kube-system.svc.cluster.local.local. udp 62 false 512" NXDOMAIN qr,rd,ra 62 0.003428246s
	[INFO] 10.244.0.17:40301 - 54791 "AAAA IN registry.kube-system.svc.cluster.local.us-central1-a.c.k8s-minikube.internal. udp 94 false 512" NXDOMAIN qr,aa,rd,ra 204 0.000095079s
	[INFO] 10.244.0.17:40301 - 54441 "A IN registry.kube-system.svc.cluster.local.us-central1-a.c.k8s-minikube.internal. udp 94 false 512" NXDOMAIN qr,aa,rd,ra 204 0.000143705s
	[INFO] 10.244.0.17:51513 - 8105 "AAAA IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,aa,rd,ra 185 0.000076417s
	[INFO] 10.244.0.17:51513 - 7772 "A IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,aa,rd,ra 185 0.000105002s
	[INFO] 10.244.0.17:49811 - 32439 "AAAA IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,aa,rd,ra 177 0.000073815s
	[INFO] 10.244.0.17:49811 - 32166 "A IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,aa,rd,ra 177 0.000108278s
	[INFO] 10.244.0.17:54277 - 32137 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000120667s
	[INFO] 10.244.0.17:54277 - 32306 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000184735s
	[INFO] 10.244.0.22:39936 - 52514 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000181324s
	[INFO] 10.244.0.22:46388 - 41339 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000252263s
	[INFO] 10.244.0.22:60659 - 23466 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000101433s
	[INFO] 10.244.0.22:43088 - 40251 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000115649s
	[INFO] 10.244.0.22:51568 - 17443 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000105345s
	[INFO] 10.244.0.22:49340 - 17002 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.0001549s
	[INFO] 10.244.0.22:59783 - 43552 "A IN storage.googleapis.com.local. udp 57 false 1232" NXDOMAIN qr,rd,ra 46 0.004514939s
	[INFO] 10.244.0.22:50913 - 26617 "AAAA IN storage.googleapis.com.local. udp 57 false 1232" NXDOMAIN qr,rd,ra 46 0.005359753s
	[INFO] 10.244.0.22:58840 - 20880 "AAAA IN storage.googleapis.com.us-central1-a.c.k8s-minikube.internal. udp 89 false 1232" NXDOMAIN qr,rd,ra 188 0.003950815s
	[INFO] 10.244.0.22:33116 - 21970 "A IN storage.googleapis.com.us-central1-a.c.k8s-minikube.internal. udp 89 false 1232" NXDOMAIN qr,rd,ra 188 0.004331009s
	[INFO] 10.244.0.22:51353 - 1685 "AAAA IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 169 0.00524232s
	[INFO] 10.244.0.22:58439 - 28947 "A IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 169 0.005423286s
	[INFO] 10.244.0.22:38115 - 17408 "AAAA IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.004730004s
	[INFO] 10.244.0.22:37500 - 51497 "A IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.004843339s
	[INFO] 10.244.0.22:45730 - 47396 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 648 0.001424946s
	[INFO] 10.244.0.22:53694 - 19739 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.002149264s
	
	
	==> describe nodes <==
	Name:               addons-491859
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-491859
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=21e20c7776311c6e29254646bf2620ea610dd192
	                    minikube.k8s.io/name=addons-491859
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_01T08_29_58_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-491859
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-491859"}
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 01 Nov 2025 08:29:54 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-491859
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 01 Nov 2025 08:31:49 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 01 Nov 2025 08:31:49 +0000   Sat, 01 Nov 2025 08:29:53 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 01 Nov 2025 08:31:49 +0000   Sat, 01 Nov 2025 08:29:53 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 01 Nov 2025 08:31:49 +0000   Sat, 01 Nov 2025 08:29:53 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 01 Nov 2025 08:31:49 +0000   Sat, 01 Nov 2025 08:30:44 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-491859
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863348Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863348Ki
	  pods:               110
	System Info:
	  Machine ID:                 98aac72b9abe9f06f1b9b38568f5cc96
	  System UUID:                5c1e6350-6319-4483-8aa0-6397d62a761e
	  Boot ID:                    87f40279-f2b1-40f3-beea-00aea5942dd1
	  Kernel Version:             6.8.0-1043-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (28 in total)
	  Namespace                   Name                                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         9s
	  default                     cloud-spanner-emulator-86bd5cbb97-d2cmm                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         108s
	  gadget                      gadget-ggvnk                                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         108s
	  gcp-auth                    gcp-auth-78565c9fb4-z7tgf                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         101s
	  ingress-nginx               ingress-nginx-controller-675c5ddd98-6nth2                     100m (1%)     0 (0%)      90Mi (0%)        0 (0%)         108s
	  kube-system                 amd-gpu-device-plugin-6twrx                                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         68s
	  kube-system                 coredns-66bc5c9577-wp7lb                                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     109s
	  kube-system                 csi-hostpath-attacher-0                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         107s
	  kube-system                 csi-hostpath-resizer-0                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         107s
	  kube-system                 csi-hostpathplugin-b7wqd                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         68s
	  kube-system                 etcd-addons-491859                                            100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         115s
	  kube-system                 kindnet-7cj4p                                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      110s
	  kube-system                 kube-apiserver-addons-491859                                  250m (3%)     0 (0%)      0 (0%)           0 (0%)         116s
	  kube-system                 kube-controller-manager-addons-491859                         200m (2%)     0 (0%)      0 (0%)           0 (0%)         115s
	  kube-system                 kube-ingress-dns-minikube                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         108s
	  kube-system                 kube-proxy-h22tg                                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         110s
	  kube-system                 kube-scheduler-addons-491859                                  100m (1%)     0 (0%)      0 (0%)           0 (0%)         115s
	  kube-system                 metrics-server-85b7d694d7-8j2pv                               100m (1%)     0 (0%)      200Mi (0%)       0 (0%)         108s
	  kube-system                 nvidia-device-plugin-daemonset-hbv5p                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         68s
	  kube-system                 registry-6b586f9694-nlmgw                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         108s
	  kube-system                 registry-creds-764b6fb674-rj5zk                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         108s
	  kube-system                 registry-proxy-jncr6                                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         68s
	  kube-system                 snapshot-controller-7d9fbc56b8-7bhnn                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         107s
	  kube-system                 snapshot-controller-7d9fbc56b8-c9dzh                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         107s
	  kube-system                 storage-provisioner                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         108s
	  local-path-storage          helper-pod-create-pvc-0ee8f669-5c4e-41a4-b21d-0752409d6634    0 (0%)        0 (0%)      0 (0%)           0 (0%)         1s
	  local-path-storage          local-path-provisioner-648f6765c9-5mm52                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         108s
	  yakd-dashboard              yakd-dashboard-5ff678cb9-kmsmc                                0 (0%)        0 (0%)      128Mi (0%)       256Mi (0%)     108s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1050m (13%)  100m (1%)
	  memory             638Mi (1%)   476Mi (1%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age              From             Message
	  ----    ------                   ----             ----             -------
	  Normal  Starting                 108s             kube-proxy       
	  Normal  Starting                 2m               kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m (x8 over 2m)  kubelet          Node addons-491859 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m (x8 over 2m)  kubelet          Node addons-491859 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m (x8 over 2m)  kubelet          Node addons-491859 status is now: NodeHasSufficientPID
	  Normal  Starting                 115s             kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  115s             kubelet          Node addons-491859 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    115s             kubelet          Node addons-491859 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     115s             kubelet          Node addons-491859 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           110s             node-controller  Node addons-491859 event: Registered Node addons-491859 in Controller
	  Normal  NodeReady                68s              kubelet          Node addons-491859 status is now: NodeReady
	
	
	==> dmesg <==
	[Nov 1 08:17] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	[  +0.001657] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.001001] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.088013] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.404802] i8042: Warning: Keylock active
	[  +0.013692] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.522749] block sda: the capability attribute has been deprecated.
	[  +0.095214] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.027343] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +7.469466] kauditd_printk_skb: 47 callbacks suppressed
	
	
	==> etcd [b29235edc538375f7d6a03229d64488db46c49997437218731a8cd64edc28444] <==
	{"level":"warn","ts":"2025-11-01T08:30:32.137630Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47102","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T08:30:58.641093Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"184.235874ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/replicasets\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-11-01T08:30:58.641276Z","caller":"traceutil/trace.go:172","msg":"trace[1071687753] range","detail":"{range_begin:/registry/replicasets; range_end:; response_count:0; response_revision:1050; }","duration":"184.463028ms","start":"2025-11-01T08:30:58.456793Z","end":"2025-11-01T08:30:58.641256Z","steps":["trace[1071687753] 'agreement among raft nodes before linearized reading'  (duration: 73.383718ms)","trace[1071687753] 'range keys from in-memory index tree'  (duration: 110.825372ms)"],"step_count":2}
	{"level":"warn","ts":"2025-11-01T08:30:58.644032Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"110.993001ms","expected-duration":"100ms","prefix":"","request":"header:<ID:8128041020883280613 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/pods/gcp-auth/gcp-auth-certs-patch-2265b\" mod_revision:1048 > success:<request_put:<key:\"/registry/pods/gcp-auth/gcp-auth-certs-patch-2265b\" value_size:4415 >> failure:<request_range:<key:\"/registry/pods/gcp-auth/gcp-auth-certs-patch-2265b\" > >>","response":"size:16"}
	{"level":"info","ts":"2025-11-01T08:30:58.644152Z","caller":"traceutil/trace.go:172","msg":"trace[237136442] linearizableReadLoop","detail":"{readStateIndex:1078; appliedIndex:1077; }","duration":"113.989318ms","start":"2025-11-01T08:30:58.530149Z","end":"2025-11-01T08:30:58.644138Z","steps":["trace[237136442] 'read index received'  (duration: 40.599µs)","trace[237136442] 'applied index is now lower than readState.Index'  (duration: 113.947442ms)"],"step_count":2}
	{"level":"warn","ts":"2025-11-01T08:30:58.644283Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"161.423891ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-11-01T08:30:58.644324Z","caller":"traceutil/trace.go:172","msg":"trace[540881731] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1051; }","duration":"161.452999ms","start":"2025-11-01T08:30:58.482848Z","end":"2025-11-01T08:30:58.644301Z","steps":["trace[540881731] 'agreement among raft nodes before linearized reading'  (duration: 161.345896ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-01T08:30:58.644546Z","caller":"traceutil/trace.go:172","msg":"trace[1116447530] transaction","detail":"{read_only:false; response_revision:1051; number_of_response:1; }","duration":"206.893581ms","start":"2025-11-01T08:30:58.437629Z","end":"2025-11-01T08:30:58.644522Z","steps":["trace[1116447530] 'process raft request'  (duration: 92.553284ms)","trace[1116447530] 'compare'  (duration: 110.856274ms)"],"step_count":2}
	{"level":"warn","ts":"2025-11-01T08:30:58.644735Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"114.627641ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-11-01T08:30:58.644772Z","caller":"traceutil/trace.go:172","msg":"trace[298041514] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1051; }","duration":"114.66872ms","start":"2025-11-01T08:30:58.530095Z","end":"2025-11-01T08:30:58.644764Z","steps":["trace[298041514] 'agreement among raft nodes before linearized reading'  (duration: 114.597534ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-01T08:30:58.644913Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"115.365904ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-11-01T08:30:58.644940Z","caller":"traceutil/trace.go:172","msg":"trace[840698398] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1051; }","duration":"115.392476ms","start":"2025-11-01T08:30:58.529540Z","end":"2025-11-01T08:30:58.644932Z","steps":["trace[840698398] 'agreement among raft nodes before linearized reading'  (duration: 115.344117ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-01T08:31:04.691597Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"162.39284ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-11-01T08:31:04.691652Z","caller":"traceutil/trace.go:172","msg":"trace[884222080] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1115; }","duration":"162.456817ms","start":"2025-11-01T08:31:04.529181Z","end":"2025-11-01T08:31:04.691637Z","steps":["trace[884222080] 'agreement among raft nodes before linearized reading'  (duration: 37.985334ms)","trace[884222080] 'range keys from in-memory index tree'  (duration: 124.379268ms)"],"step_count":2}
	{"level":"warn","ts":"2025-11-01T08:31:04.691660Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"124.403486ms","expected-duration":"100ms","prefix":"","request":"header:<ID:8128041020883280793 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/pods/kube-system/amd-gpu-device-plugin-6twrx\" mod_revision:926 > success:<request_put:<key:\"/registry/pods/kube-system/amd-gpu-device-plugin-6twrx\" value_size:4565 >> failure:<request_range:<key:\"/registry/pods/kube-system/amd-gpu-device-plugin-6twrx\" > >>","response":"size:16"}
	{"level":"info","ts":"2025-11-01T08:31:04.691829Z","caller":"traceutil/trace.go:172","msg":"trace[994324902] transaction","detail":"{read_only:false; response_revision:1116; number_of_response:1; }","duration":"214.346723ms","start":"2025-11-01T08:31:04.477463Z","end":"2025-11-01T08:31:04.691810Z","steps":["trace[994324902] 'process raft request'  (duration: 89.731375ms)","trace[994324902] 'compare'  (duration: 124.328345ms)"],"step_count":2}
	{"level":"info","ts":"2025-11-01T08:31:04.691833Z","caller":"traceutil/trace.go:172","msg":"trace[873270259] linearizableReadLoop","detail":"{readStateIndex:1144; appliedIndex:1143; }","duration":"124.674198ms","start":"2025-11-01T08:31:04.567145Z","end":"2025-11-01T08:31:04.691819Z","steps":["trace[873270259] 'read index received'  (duration: 123.621805ms)","trace[873270259] 'applied index is now lower than readState.Index'  (duration: 1.050971ms)"],"step_count":2}
	{"level":"info","ts":"2025-11-01T08:31:04.691858Z","caller":"traceutil/trace.go:172","msg":"trace[1199295295] transaction","detail":"{read_only:false; response_revision:1117; number_of_response:1; }","duration":"159.964426ms","start":"2025-11-01T08:31:04.531884Z","end":"2025-11-01T08:31:04.691849Z","steps":["trace[1199295295] 'process raft request'  (duration: 159.873308ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-01T08:31:04.692004Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"161.923497ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-11-01T08:31:04.692031Z","caller":"traceutil/trace.go:172","msg":"trace[1161329594] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1118; }","duration":"161.952504ms","start":"2025-11-01T08:31:04.530070Z","end":"2025-11-01T08:31:04.692023Z","steps":["trace[1161329594] 'agreement among raft nodes before linearized reading'  (duration: 161.844345ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-01T08:31:04.692041Z","caller":"traceutil/trace.go:172","msg":"trace[2135679475] transaction","detail":"{read_only:false; response_revision:1118; number_of_response:1; }","duration":"150.532136ms","start":"2025-11-01T08:31:04.541501Z","end":"2025-11-01T08:31:04.692033Z","steps":["trace[2135679475] 'process raft request'  (duration: 150.316182ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-01T08:31:04.861925Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"163.17141ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-11-01T08:31:04.862002Z","caller":"traceutil/trace.go:172","msg":"trace[974332960] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1119; }","duration":"163.255158ms","start":"2025-11-01T08:31:04.698729Z","end":"2025-11-01T08:31:04.861984Z","steps":["trace[974332960] 'agreement among raft nodes before linearized reading'  (duration: 136.517634ms)","trace[974332960] 'range keys from in-memory index tree'  (duration: 26.627503ms)"],"step_count":2}
	{"level":"info","ts":"2025-11-01T08:31:04.862107Z","caller":"traceutil/trace.go:172","msg":"trace[1181685282] transaction","detail":"{read_only:false; response_revision:1121; number_of_response:1; }","duration":"164.056879ms","start":"2025-11-01T08:31:04.698042Z","end":"2025-11-01T08:31:04.862099Z","steps":["trace[1181685282] 'process raft request'  (duration: 163.942969ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-01T08:31:04.862126Z","caller":"traceutil/trace.go:172","msg":"trace[1626285233] transaction","detail":"{read_only:false; response_revision:1120; number_of_response:1; }","duration":"164.617421ms","start":"2025-11-01T08:31:04.697489Z","end":"2025-11-01T08:31:04.862106Z","steps":["trace[1626285233] 'process raft request'  (duration: 137.759732ms)","trace[1626285233] 'compare'  (duration: 26.590567ms)"],"step_count":2}
	
	
	==> gcp-auth [53b633b13927f21d413e05b002d8fbcad4f3906c181691eb3e2d4a91d26ff070] <==
	2025/11/01 08:31:18 GCP Auth Webhook started!
	2025/11/01 08:31:43 Ready to marshal response ...
	2025/11/01 08:31:43 Ready to write response ...
	2025/11/01 08:31:43 Ready to marshal response ...
	2025/11/01 08:31:43 Ready to write response ...
	2025/11/01 08:31:43 Ready to marshal response ...
	2025/11/01 08:31:43 Ready to write response ...
	2025/11/01 08:31:51 Ready to marshal response ...
	2025/11/01 08:31:51 Ready to write response ...
	2025/11/01 08:31:51 Ready to marshal response ...
	2025/11/01 08:31:51 Ready to write response ...
	
	
	==> kernel <==
	 08:31:52 up 14 min,  0 user,  load average: 0.95, 0.57, 0.23
	Linux addons-491859 6.8.0-1043-gcp #46~22.04.1-Ubuntu SMP Wed Oct 22 19:00:03 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [c60507f296e9571f568638d8fac5f87c704242925cf7b4aa378db809c3e3176b] <==
	I1101 08:30:03.953131       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1101 08:30:03.953467       1 controller.go:381] "Waiting for informer caches to sync"
	I1101 08:30:03.953493       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1101 08:30:03.953684       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1101 08:30:34.037429       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1101 08:30:34.049996       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1101 08:30:34.051171       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1101 08:30:34.051187       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	I1101 08:30:35.653948       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1101 08:30:35.653977       1 metrics.go:72] Registering metrics
	I1101 08:30:35.654063       1 controller.go:711] "Syncing nftables rules"
	I1101 08:30:43.947927       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1101 08:30:43.948008       1 main.go:301] handling current node
	I1101 08:30:53.947754       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1101 08:30:53.947790       1 main.go:301] handling current node
	I1101 08:31:03.947167       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1101 08:31:03.947204       1 main.go:301] handling current node
	I1101 08:31:13.947669       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1101 08:31:13.947708       1 main.go:301] handling current node
	I1101 08:31:23.947367       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1101 08:31:23.947425       1 main.go:301] handling current node
	I1101 08:31:33.947428       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1101 08:31:33.947471       1 main.go:301] handling current node
	I1101 08:31:43.947687       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1101 08:31:43.947716       1 main.go:301] handling current node
	
	
	==> kube-apiserver [d4c72eaef44361ee96a4bd979c97bf60778beaa33b136b1407ada4c192a11240] <==
	W1101 08:30:47.748658       1 handler_proxy.go:99] no RequestInfo found in the context
	E1101 08:30:47.748743       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E1101 08:30:47.748938       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.105.158.97:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.105.158.97:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.105.158.97:443: connect: connection refused" logger="UnhandledError"
	E1101 08:30:47.750580       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.105.158.97:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.105.158.97:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.105.158.97:443: connect: connection refused" logger="UnhandledError"
	W1101 08:30:48.749069       1 handler_proxy.go:99] no RequestInfo found in the context
	W1101 08:30:48.749105       1 handler_proxy.go:99] no RequestInfo found in the context
	E1101 08:30:48.749138       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I1101 08:30:48.749156       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	E1101 08:30:48.749167       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1101 08:30:48.750333       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	E1101 08:30:52.760503       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.105.158.97:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.105.158.97:443/apis/metrics.k8s.io/v1beta1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" logger="UnhandledError"
	W1101 08:30:52.760559       1 handler_proxy.go:99] no RequestInfo found in the context
	E1101 08:30:52.760603       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1101 08:30:52.770851       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1101 08:31:50.557803       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:60866: use of closed network connection
	E1101 08:31:50.708797       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:60912: use of closed network connection
	
	
	==> kube-controller-manager [808e84f4795d8f4c21c0316fc7e7437f547a0d653508d0419288257452ccf97b] <==
	I1101 08:30:02.088639       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1101 08:30:02.088652       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1101 08:30:02.088660       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1101 08:30:02.088804       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1101 08:30:02.088844       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1101 08:30:02.088924       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1101 08:30:02.088939       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1101 08:30:02.089175       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1101 08:30:02.089545       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1101 08:30:02.089585       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1101 08:30:02.089603       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1101 08:30:02.090742       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1101 08:30:02.091903       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1101 08:30:02.094199       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1101 08:30:02.094240       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1101 08:30:02.104432       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1101 08:30:02.112098       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	E1101 08:30:32.098335       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1101 08:30:32.098472       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="volumesnapshots.snapshot.storage.k8s.io"
	I1101 08:30:32.098504       1 shared_informer.go:349] "Waiting for caches to sync" controller="resource quota"
	I1101 08:30:32.121699       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I1101 08:30:32.125108       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I1101 08:30:32.199240       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1101 08:30:32.225463       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1101 08:30:47.040659       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [4c1ad1a76dfd8c2d5f52abbb115ac02bc7871e82ffe8eed8b2bce0574ab65ce3] <==
	I1101 08:30:03.433198       1 server_linux.go:53] "Using iptables proxy"
	I1101 08:30:03.609646       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1101 08:30:03.711817       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1101 08:30:03.711877       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1101 08:30:03.711960       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1101 08:30:03.741388       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1101 08:30:03.741516       1 server_linux.go:132] "Using iptables Proxier"
	I1101 08:30:03.748942       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1101 08:30:03.756071       1 server.go:527] "Version info" version="v1.34.1"
	I1101 08:30:03.756107       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1101 08:30:03.759660       1 config.go:200] "Starting service config controller"
	I1101 08:30:03.759682       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1101 08:30:03.759714       1 config.go:106] "Starting endpoint slice config controller"
	I1101 08:30:03.759721       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1101 08:30:03.759737       1 config.go:403] "Starting serviceCIDR config controller"
	I1101 08:30:03.759742       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1101 08:30:03.760675       1 config.go:309] "Starting node config controller"
	I1101 08:30:03.760685       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1101 08:30:03.760693       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1101 08:30:03.859942       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1101 08:30:03.860779       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1101 08:30:03.864557       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [cdda903ada7547082c69c9584210b581ad9bfe62602052de013ddc3c59d043bc] <==
	E1101 08:29:54.659730       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1101 08:29:54.659837       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1101 08:29:54.660024       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1101 08:29:54.660044       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1101 08:29:54.660092       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1101 08:29:54.660090       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1101 08:29:54.660134       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1101 08:29:54.660185       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1101 08:29:55.508830       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1101 08:29:55.537173       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1101 08:29:55.558893       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1101 08:29:55.564922       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1101 08:29:55.581322       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1101 08:29:55.603924       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1101 08:29:55.603943       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1101 08:29:55.608299       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1101 08:29:55.645608       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1101 08:29:55.678147       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1101 08:29:55.735284       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1101 08:29:55.751361       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1101 08:29:55.831703       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1101 08:29:55.836835       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1101 08:29:55.851064       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1101 08:29:55.877552       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	I1101 08:29:57.851557       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 01 08:31:12 addons-491859 kubelet[1302]: I1101 08:31:12.248577    1302 scope.go:117] "RemoveContainer" containerID="e04cfe5f2eb989f42333317e903ba709c92267b8843476f2b7b7604accedb58b"
	Nov 01 08:31:13 addons-491859 kubelet[1302]: I1101 08:31:13.520368    1302 scope.go:117] "RemoveContainer" containerID="e04cfe5f2eb989f42333317e903ba709c92267b8843476f2b7b7604accedb58b"
	Nov 01 08:31:13 addons-491859 kubelet[1302]: I1101 08:31:13.549453    1302 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="gadget/gadget-ggvnk" podStartSLOduration=66.236056536 podStartE2EDuration="1m9.549434678s" podCreationTimestamp="2025-11-01 08:30:04 +0000 UTC" firstStartedPulling="2025-11-01 08:31:09.220724132 +0000 UTC m=+72.056120611" lastFinishedPulling="2025-11-01 08:31:12.534102272 +0000 UTC m=+75.369498753" observedRunningTime="2025-11-01 08:31:13.53761623 +0000 UTC m=+76.373012721" watchObservedRunningTime="2025-11-01 08:31:13.549434678 +0000 UTC m=+76.384831165"
	Nov 01 08:31:14 addons-491859 kubelet[1302]: I1101 08:31:14.298943    1302 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: hostpath.csi.k8s.io endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0
	Nov 01 08:31:14 addons-491859 kubelet[1302]: I1101 08:31:14.298986    1302 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: hostpath.csi.k8s.io at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock
	Nov 01 08:31:14 addons-491859 kubelet[1302]: I1101 08:31:14.664256    1302 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vlx9t\" (UniqueName: \"kubernetes.io/projected/835e3286-2d34-407f-a36a-d350869ea5fb-kube-api-access-vlx9t\") pod \"835e3286-2d34-407f-a36a-d350869ea5fb\" (UID: \"835e3286-2d34-407f-a36a-d350869ea5fb\") "
	Nov 01 08:31:14 addons-491859 kubelet[1302]: I1101 08:31:14.667277    1302 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/835e3286-2d34-407f-a36a-d350869ea5fb-kube-api-access-vlx9t" (OuterVolumeSpecName: "kube-api-access-vlx9t") pod "835e3286-2d34-407f-a36a-d350869ea5fb" (UID: "835e3286-2d34-407f-a36a-d350869ea5fb"). InnerVolumeSpecName "kube-api-access-vlx9t". PluginName "kubernetes.io/projected", VolumeGIDValue ""
	Nov 01 08:31:14 addons-491859 kubelet[1302]: I1101 08:31:14.765251    1302 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-vlx9t\" (UniqueName: \"kubernetes.io/projected/835e3286-2d34-407f-a36a-d350869ea5fb-kube-api-access-vlx9t\") on node \"addons-491859\" DevicePath \"\""
	Nov 01 08:31:15 addons-491859 kubelet[1302]: I1101 08:31:15.539301    1302 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e2f94f12c19b07854ee684ca254983829eea531e0d2dc1d527a24866384bf8d7"
	Nov 01 08:31:16 addons-491859 kubelet[1302]: E1101 08:31:16.075402    1302 secret.go:189] Couldn't get secret kube-system/registry-creds-gcr: secret "registry-creds-gcr" not found
	Nov 01 08:31:16 addons-491859 kubelet[1302]: E1101 08:31:16.075493    1302 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5f281acc-558f-462c-bf98-c52c7b8b34a1-gcr-creds podName:5f281acc-558f-462c-bf98-c52c7b8b34a1 nodeName:}" failed. No retries permitted until 2025-11-01 08:31:48.07547314 +0000 UTC m=+110.910869625 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "gcr-creds" (UniqueName: "kubernetes.io/secret/5f281acc-558f-462c-bf98-c52c7b8b34a1-gcr-creds") pod "registry-creds-764b6fb674-rj5zk" (UID: "5f281acc-558f-462c-bf98-c52c7b8b34a1") : secret "registry-creds-gcr" not found
	Nov 01 08:31:16 addons-491859 kubelet[1302]: I1101 08:31:16.564601    1302 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/csi-hostpathplugin-b7wqd" podStartSLOduration=1.518667472 podStartE2EDuration="32.564580267s" podCreationTimestamp="2025-11-01 08:30:44 +0000 UTC" firstStartedPulling="2025-11-01 08:30:44.69467416 +0000 UTC m=+47.530070629" lastFinishedPulling="2025-11-01 08:31:15.740586947 +0000 UTC m=+78.575983424" observedRunningTime="2025-11-01 08:31:16.563317243 +0000 UTC m=+79.398713780" watchObservedRunningTime="2025-11-01 08:31:16.564580267 +0000 UTC m=+79.399976757"
	Nov 01 08:31:18 addons-491859 kubelet[1302]: I1101 08:31:18.579496    1302 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="gcp-auth/gcp-auth-78565c9fb4-z7tgf" podStartSLOduration=65.949711438 podStartE2EDuration="1m7.579470098s" podCreationTimestamp="2025-11-01 08:30:11 +0000 UTC" firstStartedPulling="2025-11-01 08:31:16.4170664 +0000 UTC m=+79.252462870" lastFinishedPulling="2025-11-01 08:31:18.046825058 +0000 UTC m=+80.882221530" observedRunningTime="2025-11-01 08:31:18.578061697 +0000 UTC m=+81.413458187" watchObservedRunningTime="2025-11-01 08:31:18.579470098 +0000 UTC m=+81.414866592"
	Nov 01 08:31:22 addons-491859 kubelet[1302]: I1101 08:31:22.588288    1302 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="ingress-nginx/ingress-nginx-controller-675c5ddd98-6nth2" podStartSLOduration=73.187393169 podStartE2EDuration="1m18.588267899s" podCreationTimestamp="2025-11-01 08:30:04 +0000 UTC" firstStartedPulling="2025-11-01 08:31:16.420175049 +0000 UTC m=+79.255571521" lastFinishedPulling="2025-11-01 08:31:21.821049769 +0000 UTC m=+84.656446251" observedRunningTime="2025-11-01 08:31:22.58736995 +0000 UTC m=+85.422766450" watchObservedRunningTime="2025-11-01 08:31:22.588267899 +0000 UTC m=+85.423664390"
	Nov 01 08:31:35 addons-491859 kubelet[1302]: I1101 08:31:35.250923    1302 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f2a6c55a-485e-4b2a-bdc8-cab251d51484" path="/var/lib/kubelet/pods/f2a6c55a-485e-4b2a-bdc8-cab251d51484/volumes"
	Nov 01 08:31:43 addons-491859 kubelet[1302]: I1101 08:31:43.493833    1302 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ngmcv\" (UniqueName: \"kubernetes.io/projected/2ae16e08-af4f-4803-85d8-2d9acd18bb15-kube-api-access-ngmcv\") pod \"busybox\" (UID: \"2ae16e08-af4f-4803-85d8-2d9acd18bb15\") " pod="default/busybox"
	Nov 01 08:31:43 addons-491859 kubelet[1302]: I1101 08:31:43.493933    1302 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/2ae16e08-af4f-4803-85d8-2d9acd18bb15-gcp-creds\") pod \"busybox\" (UID: \"2ae16e08-af4f-4803-85d8-2d9acd18bb15\") " pod="default/busybox"
	Nov 01 08:31:44 addons-491859 kubelet[1302]: I1101 08:31:44.671066    1302 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/busybox" podStartSLOduration=0.984855437 podStartE2EDuration="1.671044482s" podCreationTimestamp="2025-11-01 08:31:43 +0000 UTC" firstStartedPulling="2025-11-01 08:31:43.743739141 +0000 UTC m=+106.579135610" lastFinishedPulling="2025-11-01 08:31:44.429928186 +0000 UTC m=+107.265324655" observedRunningTime="2025-11-01 08:31:44.670219872 +0000 UTC m=+107.505616359" watchObservedRunningTime="2025-11-01 08:31:44.671044482 +0000 UTC m=+107.506440975"
	Nov 01 08:31:45 addons-491859 kubelet[1302]: I1101 08:31:45.250207    1302 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="835e3286-2d34-407f-a36a-d350869ea5fb" path="/var/lib/kubelet/pods/835e3286-2d34-407f-a36a-d350869ea5fb/volumes"
	Nov 01 08:31:48 addons-491859 kubelet[1302]: E1101 08:31:48.132024    1302 secret.go:189] Couldn't get secret kube-system/registry-creds-gcr: secret "registry-creds-gcr" not found
	Nov 01 08:31:48 addons-491859 kubelet[1302]: E1101 08:31:48.132106    1302 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5f281acc-558f-462c-bf98-c52c7b8b34a1-gcr-creds podName:5f281acc-558f-462c-bf98-c52c7b8b34a1 nodeName:}" failed. No retries permitted until 2025-11-01 08:32:52.132093018 +0000 UTC m=+174.967489499 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "gcr-creds" (UniqueName: "kubernetes.io/secret/5f281acc-558f-462c-bf98-c52c7b8b34a1-gcr-creds") pod "registry-creds-764b6fb674-rj5zk" (UID: "5f281acc-558f-462c-bf98-c52c7b8b34a1") : secret "registry-creds-gcr" not found
	Nov 01 08:31:51 addons-491859 kubelet[1302]: I1101 08:31:51.356485    1302 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qsrfr\" (UniqueName: \"kubernetes.io/projected/e8e855fe-77b5-4112-8dfe-31ba402b5928-kube-api-access-qsrfr\") pod \"helper-pod-create-pvc-0ee8f669-5c4e-41a4-b21d-0752409d6634\" (UID: \"e8e855fe-77b5-4112-8dfe-31ba402b5928\") " pod="local-path-storage/helper-pod-create-pvc-0ee8f669-5c4e-41a4-b21d-0752409d6634"
	Nov 01 08:31:51 addons-491859 kubelet[1302]: I1101 08:31:51.356546    1302 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"script\" (UniqueName: \"kubernetes.io/configmap/e8e855fe-77b5-4112-8dfe-31ba402b5928-script\") pod \"helper-pod-create-pvc-0ee8f669-5c4e-41a4-b21d-0752409d6634\" (UID: \"e8e855fe-77b5-4112-8dfe-31ba402b5928\") " pod="local-path-storage/helper-pod-create-pvc-0ee8f669-5c4e-41a4-b21d-0752409d6634"
	Nov 01 08:31:51 addons-491859 kubelet[1302]: I1101 08:31:51.356576    1302 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/e8e855fe-77b5-4112-8dfe-31ba402b5928-gcp-creds\") pod \"helper-pod-create-pvc-0ee8f669-5c4e-41a4-b21d-0752409d6634\" (UID: \"e8e855fe-77b5-4112-8dfe-31ba402b5928\") " pod="local-path-storage/helper-pod-create-pvc-0ee8f669-5c4e-41a4-b21d-0752409d6634"
	Nov 01 08:31:51 addons-491859 kubelet[1302]: I1101 08:31:51.356667    1302 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/host-path/e8e855fe-77b5-4112-8dfe-31ba402b5928-data\") pod \"helper-pod-create-pvc-0ee8f669-5c4e-41a4-b21d-0752409d6634\" (UID: \"e8e855fe-77b5-4112-8dfe-31ba402b5928\") " pod="local-path-storage/helper-pod-create-pvc-0ee8f669-5c4e-41a4-b21d-0752409d6634"
	
	
	==> storage-provisioner [87757a0f68b4c84bb6b6a0830633d098f646e6d9b6aa521bccfaeb77b540635f] <==
	W1101 08:31:27.003692       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 08:31:29.006602       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 08:31:29.011025       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 08:31:31.014136       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 08:31:31.017824       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 08:31:33.020508       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 08:31:33.025324       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 08:31:35.028520       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 08:31:35.032149       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 08:31:37.034982       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 08:31:37.040354       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 08:31:39.043436       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 08:31:39.047109       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 08:31:41.049951       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 08:31:41.055343       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 08:31:43.057940       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 08:31:43.062177       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 08:31:45.064954       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 08:31:45.068856       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 08:31:47.071727       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 08:31:47.075387       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 08:31:49.078627       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 08:31:49.082055       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 08:31:51.084992       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 08:31:51.089904       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
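The storage-provisioner log above repeatedly warns that v1 Endpoints is deprecated in v1.33+ in favour of discovery.k8s.io/v1 EndpointSlice. For reference, a minimal client-go sketch of the suggested replacement is shown below; the kubeconfig path, namespace, and program structure are illustrative assumptions, not part of the minikube code that emitted the warning.

	// endpointslice_list.go - minimal sketch, assuming a reachable kubeconfig at
	// the default path. It lists discovery.k8s.io/v1 EndpointSlices instead of
	// CoreV1 Endpoints, which is the migration the warning above points to.
	package main

	import (
		"context"
		"fmt"
		"os"
		"path/filepath"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		kubeconfig := filepath.Join(os.Getenv("HOME"), ".kube", "config")
		cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}

		// EndpointSlices carry the same endpoint data as v1 Endpoints but are
		// served from the discovery.k8s.io/v1 group, so listing them avoids the
		// "v1 Endpoints is deprecated in v1.33+" warning seen in the log.
		slices, err := cs.DiscoveryV1().EndpointSlices("kube-system").List(context.TODO(), metav1.ListOptions{})
		if err != nil {
			panic(err)
		}
		for _, s := range slices.Items {
			// Slices belonging to a Service are linked via the standard
			// kubernetes.io/service-name label.
			fmt.Printf("%s\t(service=%s)\t%d endpoints\n",
				s.Name, s.Labels["kubernetes.io/service-name"], len(s.Endpoints))
		}
	}
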
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-491859 -n addons-491859
helpers_test.go:269: (dbg) Run:  kubectl --context addons-491859 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: test-local-path ingress-nginx-admission-create-hh4rd ingress-nginx-admission-patch-lsz25 registry-creds-764b6fb674-rj5zk helper-pod-create-pvc-0ee8f669-5c4e-41a4-b21d-0752409d6634
helpers_test.go:282: ======> post-mortem[TestAddons/parallel/Headlamp]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context addons-491859 describe pod test-local-path ingress-nginx-admission-create-hh4rd ingress-nginx-admission-patch-lsz25 registry-creds-764b6fb674-rj5zk helper-pod-create-pvc-0ee8f669-5c4e-41a4-b21d-0752409d6634
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context addons-491859 describe pod test-local-path ingress-nginx-admission-create-hh4rd ingress-nginx-admission-patch-lsz25 registry-creds-764b6fb674-rj5zk helper-pod-create-pvc-0ee8f669-5c4e-41a4-b21d-0752409d6634: exit status 1 (70.459183ms)

                                                
                                                
-- stdout --
	Name:             test-local-path
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           run=test-local-path
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Containers:
	  busybox:
	    Image:      busybox:stable
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sh
	      -c
	      echo 'local-path-provisioner' > /test/file1
	    Environment:
	      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
	      PROJECT_ID:                      this_is_fake
	      GCP_PROJECT:                     this_is_fake
	      GCLOUD_PROJECT:                  this_is_fake
	      GOOGLE_CLOUD_PROJECT:            this_is_fake
	      CLOUDSDK_CORE_PROJECT:           this_is_fake
	    Mounts:
	      /google-app-creds.json from gcp-creds (ro)
	      /test from data (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-85pr5 (ro)
	Volumes:
	  data:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  test-pvc
	    ReadOnly:   false
	  kube-api-access-85pr5:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	  gcp-creds:
	    Type:          HostPath (bare host directory volume)
	    Path:          /var/lib/minikube/google_application_credentials.json
	    HostPathType:  File
	QoS Class:         BestEffort
	Node-Selectors:    <none>
	Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:            <none>

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-hh4rd" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-lsz25" not found
	Error from server (NotFound): pods "registry-creds-764b6fb674-rj5zk" not found
	Error from server (NotFound): pods "helper-pod-create-pvc-0ee8f669-5c4e-41a4-b21d-0752409d6634" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context addons-491859 describe pod test-local-path ingress-nginx-admission-create-hh4rd ingress-nginx-admission-patch-lsz25 registry-creds-764b6fb674-rj5zk helper-pod-create-pvc-0ee8f669-5c4e-41a4-b21d-0752409d6634: exit status 1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-491859 addons disable headlamp --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-491859 addons disable headlamp --alsologtostderr -v=1: exit status 11 (249.481311ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1101 08:31:53.600235   20220 out.go:360] Setting OutFile to fd 1 ...
	I1101 08:31:53.600493   20220 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 08:31:53.600502   20220 out.go:374] Setting ErrFile to fd 2...
	I1101 08:31:53.600513   20220 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 08:31:53.600723   20220 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21835-5913/.minikube/bin
	I1101 08:31:53.601018   20220 mustload.go:66] Loading cluster: addons-491859
	I1101 08:31:53.601372   20220 config.go:182] Loaded profile config "addons-491859": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 08:31:53.601386   20220 addons.go:607] checking whether the cluster is paused
	I1101 08:31:53.601463   20220 config.go:182] Loaded profile config "addons-491859": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 08:31:53.601473   20220 host.go:66] Checking if "addons-491859" exists ...
	I1101 08:31:53.601846   20220 cli_runner.go:164] Run: docker container inspect addons-491859 --format={{.State.Status}}
	I1101 08:31:53.620805   20220 ssh_runner.go:195] Run: systemctl --version
	I1101 08:31:53.620883   20220 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-491859
	I1101 08:31:53.638854   20220 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21835-5913/.minikube/machines/addons-491859/id_rsa Username:docker}
	I1101 08:31:53.738296   20220 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1101 08:31:53.738412   20220 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1101 08:31:53.767923   20220 cri.go:89] found id: "33e4e1fc1e33072f888a46aa17d3beb4e58f11877a9d27925e1e5d968eb6c903"
	I1101 08:31:53.767946   20220 cri.go:89] found id: "f04af0fd3a62dfc9f83c9abac1b7ccb6528a248e3fd3ee02ea1f2a7350778e83"
	I1101 08:31:53.767952   20220 cri.go:89] found id: "a1f3a49b7f394fcf06e3ec79eef028b31fb8f10e2d0269aa4eec27450086f2e9"
	I1101 08:31:53.767956   20220 cri.go:89] found id: "ff4bdf52bbb882d70c007d186f38568cb5286b9a2116e10107044414d1c422b0"
	I1101 08:31:53.767961   20220 cri.go:89] found id: "9d1050c081be96d28152bfd4e229378b4cc1d8c31d74f567fbc905b5e676cbe5"
	I1101 08:31:53.767966   20220 cri.go:89] found id: "2c81dda5dfe97017e1ea451a903bb723503013671dfd4ad2848dbd7ed4c00fda"
	I1101 08:31:53.767970   20220 cri.go:89] found id: "0f17b27c9fb94821e21590f954f59af583f7f28766b74bcf54fd77fd4403631f"
	I1101 08:31:53.767974   20220 cri.go:89] found id: "3070142e889654833dbabc836972d24ca0160e211e6a01dc410037b3d06aa377"
	I1101 08:31:53.767978   20220 cri.go:89] found id: "e0fe6aa919f9f7ec3e5dd5de78f0ba1c29746db4b58ff19fe034196dcb04a040"
	I1101 08:31:53.767998   20220 cri.go:89] found id: "dd32f839b496afac7e54669ede10e44b695513bd1f08cb2572d080421d76ed1f"
	I1101 08:31:53.768007   20220 cri.go:89] found id: "b8dc66998b8c65737a3fc68f94611d5a75e4841817858e50cf8f41fe3d0b9111"
	I1101 08:31:53.768011   20220 cri.go:89] found id: "c4c4e8392feed85ce6d8b52f77463bc2a8238dd093e730bd11ad824f180a3227"
	I1101 08:31:53.768018   20220 cri.go:89] found id: "a4c41d6f050f2ca6af53a5d7a6a54f2b04fb24731eca6d7272b14503b747f50d"
	I1101 08:31:53.768023   20220 cri.go:89] found id: "2b4413f8423a31353523e4d44f7675fac21836f4e3b491f3d3f19955b8251025"
	I1101 08:31:53.768029   20220 cri.go:89] found id: "73d495a359ef08303218d0bd2af8743a68b70af8ffdfadd49ac606f145b559b6"
	I1101 08:31:53.768038   20220 cri.go:89] found id: "18fc9837ab4ea8c07f85c79610c9eda88508e53a37801274e8022d17c69f1a98"
	I1101 08:31:53.768043   20220 cri.go:89] found id: "87757a0f68b4c84bb6b6a0830633d098f646e6d9b6aa521bccfaeb77b540635f"
	I1101 08:31:53.768059   20220 cri.go:89] found id: "f17c2b6b25fbc93674f08548b5309b9715e2cb6d7600ac5d6557505072b3fb3f"
	I1101 08:31:53.768062   20220 cri.go:89] found id: "c60507f296e9571f568638d8fac5f87c704242925cf7b4aa378db809c3e3176b"
	I1101 08:31:53.768064   20220 cri.go:89] found id: "4c1ad1a76dfd8c2d5f52abbb115ac02bc7871e82ffe8eed8b2bce0574ab65ce3"
	I1101 08:31:53.768067   20220 cri.go:89] found id: "808e84f4795d8f4c21c0316fc7e7437f547a0d653508d0419288257452ccf97b"
	I1101 08:31:53.768070   20220 cri.go:89] found id: "d4c72eaef44361ee96a4bd979c97bf60778beaa33b136b1407ada4c192a11240"
	I1101 08:31:53.768074   20220 cri.go:89] found id: "cdda903ada7547082c69c9584210b581ad9bfe62602052de013ddc3c59d043bc"
	I1101 08:31:53.768078   20220 cri.go:89] found id: "b29235edc538375f7d6a03229d64488db46c49997437218731a8cd64edc28444"
	I1101 08:31:53.768081   20220 cri.go:89] found id: ""
	I1101 08:31:53.768134   20220 ssh_runner.go:195] Run: sudo runc list -f json
	I1101 08:31:53.783256   20220 out.go:203] 
	W1101 08:31:53.784853   20220 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T08:31:53Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T08:31:53Z" level=error msg="open /run/runc: no such file or directory"
	
	W1101 08:31:53.784894   20220 out.go:285] * 
	* 
	W1101 08:31:53.788308   20220 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_efe3f0a65eabdab15324ffdebd5a66da17706a9c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_efe3f0a65eabdab15324ffdebd5a66da17706a9c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1101 08:31:53.789616   20220 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable headlamp addon: args "out/minikube-linux-amd64 -p addons-491859 addons disable headlamp --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Headlamp (2.84s)
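Every "addons disable" call in this report fails the same way: the paused-state probe lists kube-system containers with crictl and then runs "sudo runc list -f json", which exits 1 because /run/runc does not exist on this CRI-O node, and minikube surfaces that as MK_ADDON_DISABLE_PAUSED (exit status 11). A minimal sketch of that probe sequence follows; the helper names and error handling are hypothetical, only the two commands and their flags are taken from the stderr above, and the remark about CRI-O keeping its low-level runtime state elsewhere is an assumption rather than something the report confirms.

	// paused_probe.go - sketch of the probe sequence visible in the stderr above.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func kubeSystemContainerIDs() ([]string, error) {
		// Mirrors: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
			"--label", "io.kubernetes.pod.namespace=kube-system").Output()
		if err != nil {
			return nil, err
		}
		return strings.Fields(string(out)), nil
	}

	func listRuncStates() ([]byte, error) {
		// Mirrors: sudo runc list -f json. With CRI-O the configured low-level
		// runtime may keep its state somewhere other than runc's default
		// /run/runc, so this step can fail with
		// "open /run/runc: no such file or directory".
		return exec.Command("sudo", "runc", "list", "-f", "json").CombinedOutput()
	}

	func main() {
		ids, err := kubeSystemContainerIDs()
		if err != nil {
			fmt.Println("crictl probe failed:", err)
			return
		}
		fmt.Printf("found %d kube-system containers\n", len(ids))

		out, err := listRuncStates()
		if err != nil {
			// This is the branch the report keeps hitting; minikube reports it
			// as MK_ADDON_DISABLE_PAUSED and exits 11.
			fmt.Printf("runc list failed: %v\n%s", err, out)
			return
		}
		fmt.Println(string(out))
	}
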

                                                
                                    
x
+
TestAddons/parallel/CloudSpanner (5.3s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:352: "cloud-spanner-emulator-86bd5cbb97-d2cmm" [ff9b0388-f97e-403d-ae3c-30d174accfb7] Running
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.032817801s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-491859 addons disable cloud-spanner --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-491859 addons disable cloud-spanner --alsologtostderr -v=1: exit status 11 (261.155844ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1101 08:31:58.887769   20636 out.go:360] Setting OutFile to fd 1 ...
	I1101 08:31:58.887970   20636 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 08:31:58.887981   20636 out.go:374] Setting ErrFile to fd 2...
	I1101 08:31:58.887986   20636 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 08:31:58.888188   20636 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21835-5913/.minikube/bin
	I1101 08:31:58.888454   20636 mustload.go:66] Loading cluster: addons-491859
	I1101 08:31:58.888902   20636 config.go:182] Loaded profile config "addons-491859": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 08:31:58.888922   20636 addons.go:607] checking whether the cluster is paused
	I1101 08:31:58.889014   20636 config.go:182] Loaded profile config "addons-491859": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 08:31:58.889025   20636 host.go:66] Checking if "addons-491859" exists ...
	I1101 08:31:58.889407   20636 cli_runner.go:164] Run: docker container inspect addons-491859 --format={{.State.Status}}
	I1101 08:31:58.910014   20636 ssh_runner.go:195] Run: systemctl --version
	I1101 08:31:58.910055   20636 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-491859
	I1101 08:31:58.930475   20636 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21835-5913/.minikube/machines/addons-491859/id_rsa Username:docker}
	I1101 08:31:59.030408   20636 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1101 08:31:59.030542   20636 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1101 08:31:59.062140   20636 cri.go:89] found id: "33e4e1fc1e33072f888a46aa17d3beb4e58f11877a9d27925e1e5d968eb6c903"
	I1101 08:31:59.062162   20636 cri.go:89] found id: "f04af0fd3a62dfc9f83c9abac1b7ccb6528a248e3fd3ee02ea1f2a7350778e83"
	I1101 08:31:59.062167   20636 cri.go:89] found id: "a1f3a49b7f394fcf06e3ec79eef028b31fb8f10e2d0269aa4eec27450086f2e9"
	I1101 08:31:59.062172   20636 cri.go:89] found id: "ff4bdf52bbb882d70c007d186f38568cb5286b9a2116e10107044414d1c422b0"
	I1101 08:31:59.062177   20636 cri.go:89] found id: "9d1050c081be96d28152bfd4e229378b4cc1d8c31d74f567fbc905b5e676cbe5"
	I1101 08:31:59.062182   20636 cri.go:89] found id: "2c81dda5dfe97017e1ea451a903bb723503013671dfd4ad2848dbd7ed4c00fda"
	I1101 08:31:59.062187   20636 cri.go:89] found id: "0f17b27c9fb94821e21590f954f59af583f7f28766b74bcf54fd77fd4403631f"
	I1101 08:31:59.062191   20636 cri.go:89] found id: "3070142e889654833dbabc836972d24ca0160e211e6a01dc410037b3d06aa377"
	I1101 08:31:59.062196   20636 cri.go:89] found id: "e0fe6aa919f9f7ec3e5dd5de78f0ba1c29746db4b58ff19fe034196dcb04a040"
	I1101 08:31:59.062203   20636 cri.go:89] found id: "dd32f839b496afac7e54669ede10e44b695513bd1f08cb2572d080421d76ed1f"
	I1101 08:31:59.062211   20636 cri.go:89] found id: "b8dc66998b8c65737a3fc68f94611d5a75e4841817858e50cf8f41fe3d0b9111"
	I1101 08:31:59.062215   20636 cri.go:89] found id: "c4c4e8392feed85ce6d8b52f77463bc2a8238dd093e730bd11ad824f180a3227"
	I1101 08:31:59.062219   20636 cri.go:89] found id: "a4c41d6f050f2ca6af53a5d7a6a54f2b04fb24731eca6d7272b14503b747f50d"
	I1101 08:31:59.062222   20636 cri.go:89] found id: "2b4413f8423a31353523e4d44f7675fac21836f4e3b491f3d3f19955b8251025"
	I1101 08:31:59.062227   20636 cri.go:89] found id: "73d495a359ef08303218d0bd2af8743a68b70af8ffdfadd49ac606f145b559b6"
	I1101 08:31:59.062235   20636 cri.go:89] found id: "18fc9837ab4ea8c07f85c79610c9eda88508e53a37801274e8022d17c69f1a98"
	I1101 08:31:59.062242   20636 cri.go:89] found id: "87757a0f68b4c84bb6b6a0830633d098f646e6d9b6aa521bccfaeb77b540635f"
	I1101 08:31:59.062248   20636 cri.go:89] found id: "f17c2b6b25fbc93674f08548b5309b9715e2cb6d7600ac5d6557505072b3fb3f"
	I1101 08:31:59.062251   20636 cri.go:89] found id: "c60507f296e9571f568638d8fac5f87c704242925cf7b4aa378db809c3e3176b"
	I1101 08:31:59.062255   20636 cri.go:89] found id: "4c1ad1a76dfd8c2d5f52abbb115ac02bc7871e82ffe8eed8b2bce0574ab65ce3"
	I1101 08:31:59.062274   20636 cri.go:89] found id: "808e84f4795d8f4c21c0316fc7e7437f547a0d653508d0419288257452ccf97b"
	I1101 08:31:59.062280   20636 cri.go:89] found id: "d4c72eaef44361ee96a4bd979c97bf60778beaa33b136b1407ada4c192a11240"
	I1101 08:31:59.062285   20636 cri.go:89] found id: "cdda903ada7547082c69c9584210b581ad9bfe62602052de013ddc3c59d043bc"
	I1101 08:31:59.062292   20636 cri.go:89] found id: "b29235edc538375f7d6a03229d64488db46c49997437218731a8cd64edc28444"
	I1101 08:31:59.062296   20636 cri.go:89] found id: ""
	I1101 08:31:59.062343   20636 ssh_runner.go:195] Run: sudo runc list -f json
	I1101 08:31:59.079579   20636 out.go:203] 
	W1101 08:31:59.081631   20636 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T08:31:59Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T08:31:59Z" level=error msg="open /run/runc: no such file or directory"
	
	W1101 08:31:59.081657   20636 out.go:285] * 
	* 
	W1101 08:31:59.086385   20636 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e93ff976b7e98e1dc466aded9385c0856b6d1b41_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e93ff976b7e98e1dc466aded9385c0856b6d1b41_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1101 08:31:59.087393   20636 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable cloud-spanner addon: args "out/minikube-linux-amd64 -p addons-491859 addons disable cloud-spanner --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/CloudSpanner (5.30s)

                                                
                                    
x
+
TestAddons/parallel/LocalPath (8.14s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:949: (dbg) Run:  kubectl --context addons-491859 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:955: (dbg) Run:  kubectl --context addons-491859 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:959: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-491859 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-491859 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-491859 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-491859 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-491859 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:352: "test-local-path" [67157c57-8dbf-48fa-8f5d-0decaf14e57d] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "test-local-path" [67157c57-8dbf-48fa-8f5d-0decaf14e57d] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "test-local-path" [67157c57-8dbf-48fa-8f5d-0decaf14e57d] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 3.002960347s
addons_test.go:967: (dbg) Run:  kubectl --context addons-491859 get pvc test-pvc -o=json
addons_test.go:976: (dbg) Run:  out/minikube-linux-amd64 -p addons-491859 ssh "cat /opt/local-path-provisioner/pvc-0ee8f669-5c4e-41a4-b21d-0752409d6634_default_test-pvc/file1"
addons_test.go:988: (dbg) Run:  kubectl --context addons-491859 delete pod test-local-path
addons_test.go:992: (dbg) Run:  kubectl --context addons-491859 delete pvc test-pvc
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-491859 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-491859 addons disable storage-provisioner-rancher --alsologtostderr -v=1: exit status 11 (265.72705ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1101 08:31:58.887732   20635 out.go:360] Setting OutFile to fd 1 ...
	I1101 08:31:58.888119   20635 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 08:31:58.888131   20635 out.go:374] Setting ErrFile to fd 2...
	I1101 08:31:58.888135   20635 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 08:31:58.888386   20635 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21835-5913/.minikube/bin
	I1101 08:31:58.888746   20635 mustload.go:66] Loading cluster: addons-491859
	I1101 08:31:58.889127   20635 config.go:182] Loaded profile config "addons-491859": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 08:31:58.889143   20635 addons.go:607] checking whether the cluster is paused
	I1101 08:31:58.889229   20635 config.go:182] Loaded profile config "addons-491859": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 08:31:58.889240   20635 host.go:66] Checking if "addons-491859" exists ...
	I1101 08:31:58.889678   20635 cli_runner.go:164] Run: docker container inspect addons-491859 --format={{.State.Status}}
	I1101 08:31:58.909709   20635 ssh_runner.go:195] Run: systemctl --version
	I1101 08:31:58.909784   20635 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-491859
	I1101 08:31:58.930974   20635 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21835-5913/.minikube/machines/addons-491859/id_rsa Username:docker}
	I1101 08:31:59.030481   20635 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1101 08:31:59.030577   20635 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1101 08:31:59.060904   20635 cri.go:89] found id: "33e4e1fc1e33072f888a46aa17d3beb4e58f11877a9d27925e1e5d968eb6c903"
	I1101 08:31:59.060931   20635 cri.go:89] found id: "f04af0fd3a62dfc9f83c9abac1b7ccb6528a248e3fd3ee02ea1f2a7350778e83"
	I1101 08:31:59.060937   20635 cri.go:89] found id: "a1f3a49b7f394fcf06e3ec79eef028b31fb8f10e2d0269aa4eec27450086f2e9"
	I1101 08:31:59.060942   20635 cri.go:89] found id: "ff4bdf52bbb882d70c007d186f38568cb5286b9a2116e10107044414d1c422b0"
	I1101 08:31:59.060948   20635 cri.go:89] found id: "9d1050c081be96d28152bfd4e229378b4cc1d8c31d74f567fbc905b5e676cbe5"
	I1101 08:31:59.060953   20635 cri.go:89] found id: "2c81dda5dfe97017e1ea451a903bb723503013671dfd4ad2848dbd7ed4c00fda"
	I1101 08:31:59.060966   20635 cri.go:89] found id: "0f17b27c9fb94821e21590f954f59af583f7f28766b74bcf54fd77fd4403631f"
	I1101 08:31:59.060970   20635 cri.go:89] found id: "3070142e889654833dbabc836972d24ca0160e211e6a01dc410037b3d06aa377"
	I1101 08:31:59.060974   20635 cri.go:89] found id: "e0fe6aa919f9f7ec3e5dd5de78f0ba1c29746db4b58ff19fe034196dcb04a040"
	I1101 08:31:59.060982   20635 cri.go:89] found id: "dd32f839b496afac7e54669ede10e44b695513bd1f08cb2572d080421d76ed1f"
	I1101 08:31:59.060986   20635 cri.go:89] found id: "b8dc66998b8c65737a3fc68f94611d5a75e4841817858e50cf8f41fe3d0b9111"
	I1101 08:31:59.060990   20635 cri.go:89] found id: "c4c4e8392feed85ce6d8b52f77463bc2a8238dd093e730bd11ad824f180a3227"
	I1101 08:31:59.060994   20635 cri.go:89] found id: "a4c41d6f050f2ca6af53a5d7a6a54f2b04fb24731eca6d7272b14503b747f50d"
	I1101 08:31:59.060999   20635 cri.go:89] found id: "2b4413f8423a31353523e4d44f7675fac21836f4e3b491f3d3f19955b8251025"
	I1101 08:31:59.061003   20635 cri.go:89] found id: "73d495a359ef08303218d0bd2af8743a68b70af8ffdfadd49ac606f145b559b6"
	I1101 08:31:59.061018   20635 cri.go:89] found id: "18fc9837ab4ea8c07f85c79610c9eda88508e53a37801274e8022d17c69f1a98"
	I1101 08:31:59.061028   20635 cri.go:89] found id: "87757a0f68b4c84bb6b6a0830633d098f646e6d9b6aa521bccfaeb77b540635f"
	I1101 08:31:59.061033   20635 cri.go:89] found id: "f17c2b6b25fbc93674f08548b5309b9715e2cb6d7600ac5d6557505072b3fb3f"
	I1101 08:31:59.061037   20635 cri.go:89] found id: "c60507f296e9571f568638d8fac5f87c704242925cf7b4aa378db809c3e3176b"
	I1101 08:31:59.061041   20635 cri.go:89] found id: "4c1ad1a76dfd8c2d5f52abbb115ac02bc7871e82ffe8eed8b2bce0574ab65ce3"
	I1101 08:31:59.061044   20635 cri.go:89] found id: "808e84f4795d8f4c21c0316fc7e7437f547a0d653508d0419288257452ccf97b"
	I1101 08:31:59.061048   20635 cri.go:89] found id: "d4c72eaef44361ee96a4bd979c97bf60778beaa33b136b1407ada4c192a11240"
	I1101 08:31:59.061052   20635 cri.go:89] found id: "cdda903ada7547082c69c9584210b581ad9bfe62602052de013ddc3c59d043bc"
	I1101 08:31:59.061056   20635 cri.go:89] found id: "b29235edc538375f7d6a03229d64488db46c49997437218731a8cd64edc28444"
	I1101 08:31:59.061060   20635 cri.go:89] found id: ""
	I1101 08:31:59.061113   20635 ssh_runner.go:195] Run: sudo runc list -f json
	I1101 08:31:59.078626   20635 out.go:203] 
	W1101 08:31:59.080586   20635 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T08:31:59Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T08:31:59Z" level=error msg="open /run/runc: no such file or directory"
	
	W1101 08:31:59.080608   20635 out.go:285] * 
	* 
	W1101 08:31:59.085336   20635 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e93ff976b7e98e1dc466aded9385c0856b6d1b41_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e93ff976b7e98e1dc466aded9385c0856b6d1b41_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1101 08:31:59.086726   20635 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable storage-provisioner-rancher addon: args "out/minikube-linux-amd64 -p addons-491859 addons disable storage-provisioner-rancher --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/LocalPath (8.14s)
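Earlier in this LocalPath section the harness polls "kubectl get pvc test-pvc -o jsonpath={.status.phase}" until the claim is provisioned, then reads the written file back over minikube ssh. A client-go equivalent of that polling step is sketched below as a reference; the two-second interval, kubeconfig path, and program layout are assumptions, while the five-minute ceiling mirrors the test's 5m0s wait.

	// wait_pvc_bound.go - sketch of the poll loop performed by the repeated
	// jsonpath queries above, written against client-go.
	package main

	import (
		"context"
		"fmt"
		"os"
		"path/filepath"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", filepath.Join(os.Getenv("HOME"), ".kube", "config"))
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}

		// Poll the claim's status.phase until it reports Bound - the same field
		// the jsonpath expression in the log reads.
		err = wait.PollUntilContextTimeout(context.Background(), 2*time.Second, 5*time.Minute, true,
			func(ctx context.Context) (bool, error) {
				pvc, err := cs.CoreV1().PersistentVolumeClaims("default").Get(ctx, "test-pvc", metav1.GetOptions{})
				if err != nil {
					return false, err
				}
				fmt.Println("phase:", pvc.Status.Phase)
				return pvc.Status.Phase == corev1.ClaimBound, nil
			})
		if err != nil {
			panic(err)
		}
		fmt.Println("test-pvc is Bound")
	}
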

                                                
                                    
x
+
TestAddons/parallel/NvidiaDevicePlugin (6.25s)

                                                
                                                
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:352: "nvidia-device-plugin-daemonset-hbv5p" [838833dc-5806-4421-822f-e50f71ba642b] Running
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.002640809s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-491859 addons disable nvidia-device-plugin --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-491859 addons disable nvidia-device-plugin --alsologtostderr -v=1: exit status 11 (244.972713ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1101 08:31:57.016390   20411 out.go:360] Setting OutFile to fd 1 ...
	I1101 08:31:57.016711   20411 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 08:31:57.016723   20411 out.go:374] Setting ErrFile to fd 2...
	I1101 08:31:57.016728   20411 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 08:31:57.016955   20411 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21835-5913/.minikube/bin
	I1101 08:31:57.017254   20411 mustload.go:66] Loading cluster: addons-491859
	I1101 08:31:57.017636   20411 config.go:182] Loaded profile config "addons-491859": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 08:31:57.017653   20411 addons.go:607] checking whether the cluster is paused
	I1101 08:31:57.017749   20411 config.go:182] Loaded profile config "addons-491859": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 08:31:57.017765   20411 host.go:66] Checking if "addons-491859" exists ...
	I1101 08:31:57.018168   20411 cli_runner.go:164] Run: docker container inspect addons-491859 --format={{.State.Status}}
	I1101 08:31:57.035945   20411 ssh_runner.go:195] Run: systemctl --version
	I1101 08:31:57.035998   20411 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-491859
	I1101 08:31:57.054028   20411 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21835-5913/.minikube/machines/addons-491859/id_rsa Username:docker}
	I1101 08:31:57.152791   20411 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1101 08:31:57.152852   20411 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1101 08:31:57.181781   20411 cri.go:89] found id: "33e4e1fc1e33072f888a46aa17d3beb4e58f11877a9d27925e1e5d968eb6c903"
	I1101 08:31:57.181801   20411 cri.go:89] found id: "f04af0fd3a62dfc9f83c9abac1b7ccb6528a248e3fd3ee02ea1f2a7350778e83"
	I1101 08:31:57.181804   20411 cri.go:89] found id: "a1f3a49b7f394fcf06e3ec79eef028b31fb8f10e2d0269aa4eec27450086f2e9"
	I1101 08:31:57.181808   20411 cri.go:89] found id: "ff4bdf52bbb882d70c007d186f38568cb5286b9a2116e10107044414d1c422b0"
	I1101 08:31:57.181810   20411 cri.go:89] found id: "9d1050c081be96d28152bfd4e229378b4cc1d8c31d74f567fbc905b5e676cbe5"
	I1101 08:31:57.181813   20411 cri.go:89] found id: "2c81dda5dfe97017e1ea451a903bb723503013671dfd4ad2848dbd7ed4c00fda"
	I1101 08:31:57.181816   20411 cri.go:89] found id: "0f17b27c9fb94821e21590f954f59af583f7f28766b74bcf54fd77fd4403631f"
	I1101 08:31:57.181818   20411 cri.go:89] found id: "3070142e889654833dbabc836972d24ca0160e211e6a01dc410037b3d06aa377"
	I1101 08:31:57.181820   20411 cri.go:89] found id: "e0fe6aa919f9f7ec3e5dd5de78f0ba1c29746db4b58ff19fe034196dcb04a040"
	I1101 08:31:57.181825   20411 cri.go:89] found id: "dd32f839b496afac7e54669ede10e44b695513bd1f08cb2572d080421d76ed1f"
	I1101 08:31:57.181827   20411 cri.go:89] found id: "b8dc66998b8c65737a3fc68f94611d5a75e4841817858e50cf8f41fe3d0b9111"
	I1101 08:31:57.181830   20411 cri.go:89] found id: "c4c4e8392feed85ce6d8b52f77463bc2a8238dd093e730bd11ad824f180a3227"
	I1101 08:31:57.181833   20411 cri.go:89] found id: "a4c41d6f050f2ca6af53a5d7a6a54f2b04fb24731eca6d7272b14503b747f50d"
	I1101 08:31:57.181836   20411 cri.go:89] found id: "2b4413f8423a31353523e4d44f7675fac21836f4e3b491f3d3f19955b8251025"
	I1101 08:31:57.181838   20411 cri.go:89] found id: "73d495a359ef08303218d0bd2af8743a68b70af8ffdfadd49ac606f145b559b6"
	I1101 08:31:57.181842   20411 cri.go:89] found id: "18fc9837ab4ea8c07f85c79610c9eda88508e53a37801274e8022d17c69f1a98"
	I1101 08:31:57.181844   20411 cri.go:89] found id: "87757a0f68b4c84bb6b6a0830633d098f646e6d9b6aa521bccfaeb77b540635f"
	I1101 08:31:57.181848   20411 cri.go:89] found id: "f17c2b6b25fbc93674f08548b5309b9715e2cb6d7600ac5d6557505072b3fb3f"
	I1101 08:31:57.181850   20411 cri.go:89] found id: "c60507f296e9571f568638d8fac5f87c704242925cf7b4aa378db809c3e3176b"
	I1101 08:31:57.181853   20411 cri.go:89] found id: "4c1ad1a76dfd8c2d5f52abbb115ac02bc7871e82ffe8eed8b2bce0574ab65ce3"
	I1101 08:31:57.181855   20411 cri.go:89] found id: "808e84f4795d8f4c21c0316fc7e7437f547a0d653508d0419288257452ccf97b"
	I1101 08:31:57.181875   20411 cri.go:89] found id: "d4c72eaef44361ee96a4bd979c97bf60778beaa33b136b1407ada4c192a11240"
	I1101 08:31:57.181880   20411 cri.go:89] found id: "cdda903ada7547082c69c9584210b581ad9bfe62602052de013ddc3c59d043bc"
	I1101 08:31:57.181884   20411 cri.go:89] found id: "b29235edc538375f7d6a03229d64488db46c49997437218731a8cd64edc28444"
	I1101 08:31:57.181888   20411 cri.go:89] found id: ""
	I1101 08:31:57.181928   20411 ssh_runner.go:195] Run: sudo runc list -f json
	I1101 08:31:57.196098   20411 out.go:203] 
	W1101 08:31:57.197364   20411 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T08:31:57Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T08:31:57Z" level=error msg="open /run/runc: no such file or directory"
	
	W1101 08:31:57.197393   20411 out.go:285] * 
	* 
	W1101 08:31:57.200366   20411 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_47e1a72799625313bd916979b0f8aa84efd54736_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_47e1a72799625313bd916979b0f8aa84efd54736_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1101 08:31:57.201799   20411 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable nvidia-device-plugin addon: args "out/minikube-linux-amd64 -p addons-491859 addons disable nvidia-device-plugin --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/NvidiaDevicePlugin (6.25s)

                                                
                                    
x
+
TestAddons/parallel/Yakd (5.26s)

                                                
                                                
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:352: "yakd-dashboard-5ff678cb9-kmsmc" [5279d93f-fa80-4084-b23d-79e41bfa5241] Running
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.003530305s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-491859 addons disable yakd --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-491859 addons disable yakd --alsologtostderr -v=1: exit status 11 (252.263545ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1101 08:32:07.595009   21875 out.go:360] Setting OutFile to fd 1 ...
	I1101 08:32:07.595332   21875 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 08:32:07.595344   21875 out.go:374] Setting ErrFile to fd 2...
	I1101 08:32:07.595351   21875 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 08:32:07.595558   21875 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21835-5913/.minikube/bin
	I1101 08:32:07.595859   21875 mustload.go:66] Loading cluster: addons-491859
	I1101 08:32:07.596375   21875 config.go:182] Loaded profile config "addons-491859": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 08:32:07.596395   21875 addons.go:607] checking whether the cluster is paused
	I1101 08:32:07.596529   21875 config.go:182] Loaded profile config "addons-491859": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 08:32:07.596558   21875 host.go:66] Checking if "addons-491859" exists ...
	I1101 08:32:07.597108   21875 cli_runner.go:164] Run: docker container inspect addons-491859 --format={{.State.Status}}
	I1101 08:32:07.616624   21875 ssh_runner.go:195] Run: systemctl --version
	I1101 08:32:07.616677   21875 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-491859
	I1101 08:32:07.635150   21875 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21835-5913/.minikube/machines/addons-491859/id_rsa Username:docker}
	I1101 08:32:07.735812   21875 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1101 08:32:07.735920   21875 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1101 08:32:07.766727   21875 cri.go:89] found id: "33e4e1fc1e33072f888a46aa17d3beb4e58f11877a9d27925e1e5d968eb6c903"
	I1101 08:32:07.766757   21875 cri.go:89] found id: "f04af0fd3a62dfc9f83c9abac1b7ccb6528a248e3fd3ee02ea1f2a7350778e83"
	I1101 08:32:07.766762   21875 cri.go:89] found id: "a1f3a49b7f394fcf06e3ec79eef028b31fb8f10e2d0269aa4eec27450086f2e9"
	I1101 08:32:07.766765   21875 cri.go:89] found id: "ff4bdf52bbb882d70c007d186f38568cb5286b9a2116e10107044414d1c422b0"
	I1101 08:32:07.766768   21875 cri.go:89] found id: "9d1050c081be96d28152bfd4e229378b4cc1d8c31d74f567fbc905b5e676cbe5"
	I1101 08:32:07.766770   21875 cri.go:89] found id: "2c81dda5dfe97017e1ea451a903bb723503013671dfd4ad2848dbd7ed4c00fda"
	I1101 08:32:07.766773   21875 cri.go:89] found id: "0f17b27c9fb94821e21590f954f59af583f7f28766b74bcf54fd77fd4403631f"
	I1101 08:32:07.766775   21875 cri.go:89] found id: "3070142e889654833dbabc836972d24ca0160e211e6a01dc410037b3d06aa377"
	I1101 08:32:07.766777   21875 cri.go:89] found id: "e0fe6aa919f9f7ec3e5dd5de78f0ba1c29746db4b58ff19fe034196dcb04a040"
	I1101 08:32:07.766788   21875 cri.go:89] found id: "dd32f839b496afac7e54669ede10e44b695513bd1f08cb2572d080421d76ed1f"
	I1101 08:32:07.766792   21875 cri.go:89] found id: "b8dc66998b8c65737a3fc68f94611d5a75e4841817858e50cf8f41fe3d0b9111"
	I1101 08:32:07.766794   21875 cri.go:89] found id: "c4c4e8392feed85ce6d8b52f77463bc2a8238dd093e730bd11ad824f180a3227"
	I1101 08:32:07.766797   21875 cri.go:89] found id: "a4c41d6f050f2ca6af53a5d7a6a54f2b04fb24731eca6d7272b14503b747f50d"
	I1101 08:32:07.766799   21875 cri.go:89] found id: "2b4413f8423a31353523e4d44f7675fac21836f4e3b491f3d3f19955b8251025"
	I1101 08:32:07.766801   21875 cri.go:89] found id: "73d495a359ef08303218d0bd2af8743a68b70af8ffdfadd49ac606f145b559b6"
	I1101 08:32:07.766808   21875 cri.go:89] found id: "18fc9837ab4ea8c07f85c79610c9eda88508e53a37801274e8022d17c69f1a98"
	I1101 08:32:07.766811   21875 cri.go:89] found id: "87757a0f68b4c84bb6b6a0830633d098f646e6d9b6aa521bccfaeb77b540635f"
	I1101 08:32:07.766819   21875 cri.go:89] found id: "f17c2b6b25fbc93674f08548b5309b9715e2cb6d7600ac5d6557505072b3fb3f"
	I1101 08:32:07.766822   21875 cri.go:89] found id: "c60507f296e9571f568638d8fac5f87c704242925cf7b4aa378db809c3e3176b"
	I1101 08:32:07.766824   21875 cri.go:89] found id: "4c1ad1a76dfd8c2d5f52abbb115ac02bc7871e82ffe8eed8b2bce0574ab65ce3"
	I1101 08:32:07.766827   21875 cri.go:89] found id: "808e84f4795d8f4c21c0316fc7e7437f547a0d653508d0419288257452ccf97b"
	I1101 08:32:07.766829   21875 cri.go:89] found id: "d4c72eaef44361ee96a4bd979c97bf60778beaa33b136b1407ada4c192a11240"
	I1101 08:32:07.766832   21875 cri.go:89] found id: "cdda903ada7547082c69c9584210b581ad9bfe62602052de013ddc3c59d043bc"
	I1101 08:32:07.766834   21875 cri.go:89] found id: "b29235edc538375f7d6a03229d64488db46c49997437218731a8cd64edc28444"
	I1101 08:32:07.766837   21875 cri.go:89] found id: ""
	I1101 08:32:07.766888   21875 ssh_runner.go:195] Run: sudo runc list -f json
	I1101 08:32:07.782107   21875 out.go:203] 
	W1101 08:32:07.783724   21875 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T08:32:07Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T08:32:07Z" level=error msg="open /run/runc: no such file or directory"
	
	W1101 08:32:07.783743   21875 out.go:285] * 
	* 
	W1101 08:32:07.786768   21875 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_82e5d844def28f20a5cac88dc27578ab5d1e7e1a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_82e5d844def28f20a5cac88dc27578ab5d1e7e1a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1101 08:32:07.788431   21875 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable yakd addon: args "out/minikube-linux-amd64 -p addons-491859 addons disable yakd --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Yakd (5.26s)

                                                
                                    
x
+
TestAddons/parallel/AmdGpuDevicePlugin (6.26s)

                                                
                                                
=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:1038: (dbg) TestAddons/parallel/AmdGpuDevicePlugin: waiting 6m0s for pods matching "name=amd-gpu-device-plugin" in namespace "kube-system" ...
I1101 08:31:59.095440    9414 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
helpers_test.go:352: "amd-gpu-device-plugin-6twrx" [6d120f25-a6a5-48f2-8849-25607b2e8338] Running
I1101 08:31:59.099247    9414 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1101 08:31:59.099272    9414 kapi.go:107] duration metric: took 3.845147ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:1038: (dbg) TestAddons/parallel/AmdGpuDevicePlugin: name=amd-gpu-device-plugin healthy within 6.003967196s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-491859 addons disable amd-gpu-device-plugin --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-491859 addons disable amd-gpu-device-plugin --alsologtostderr -v=1: exit status 11 (253.358525ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1101 08:32:05.155667   21584 out.go:360] Setting OutFile to fd 1 ...
	I1101 08:32:05.155992   21584 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 08:32:05.156004   21584 out.go:374] Setting ErrFile to fd 2...
	I1101 08:32:05.156007   21584 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 08:32:05.156188   21584 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21835-5913/.minikube/bin
	I1101 08:32:05.156467   21584 mustload.go:66] Loading cluster: addons-491859
	I1101 08:32:05.156791   21584 config.go:182] Loaded profile config "addons-491859": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 08:32:05.156805   21584 addons.go:607] checking whether the cluster is paused
	I1101 08:32:05.156908   21584 config.go:182] Loaded profile config "addons-491859": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 08:32:05.156921   21584 host.go:66] Checking if "addons-491859" exists ...
	I1101 08:32:05.157276   21584 cli_runner.go:164] Run: docker container inspect addons-491859 --format={{.State.Status}}
	I1101 08:32:05.176437   21584 ssh_runner.go:195] Run: systemctl --version
	I1101 08:32:05.176504   21584 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-491859
	I1101 08:32:05.195150   21584 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21835-5913/.minikube/machines/addons-491859/id_rsa Username:docker}
	I1101 08:32:05.296169   21584 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1101 08:32:05.296234   21584 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1101 08:32:05.326491   21584 cri.go:89] found id: "33e4e1fc1e33072f888a46aa17d3beb4e58f11877a9d27925e1e5d968eb6c903"
	I1101 08:32:05.326511   21584 cri.go:89] found id: "f04af0fd3a62dfc9f83c9abac1b7ccb6528a248e3fd3ee02ea1f2a7350778e83"
	I1101 08:32:05.326515   21584 cri.go:89] found id: "a1f3a49b7f394fcf06e3ec79eef028b31fb8f10e2d0269aa4eec27450086f2e9"
	I1101 08:32:05.326524   21584 cri.go:89] found id: "ff4bdf52bbb882d70c007d186f38568cb5286b9a2116e10107044414d1c422b0"
	I1101 08:32:05.326538   21584 cri.go:89] found id: "9d1050c081be96d28152bfd4e229378b4cc1d8c31d74f567fbc905b5e676cbe5"
	I1101 08:32:05.326541   21584 cri.go:89] found id: "2c81dda5dfe97017e1ea451a903bb723503013671dfd4ad2848dbd7ed4c00fda"
	I1101 08:32:05.326544   21584 cri.go:89] found id: "0f17b27c9fb94821e21590f954f59af583f7f28766b74bcf54fd77fd4403631f"
	I1101 08:32:05.326546   21584 cri.go:89] found id: "3070142e889654833dbabc836972d24ca0160e211e6a01dc410037b3d06aa377"
	I1101 08:32:05.326549   21584 cri.go:89] found id: "e0fe6aa919f9f7ec3e5dd5de78f0ba1c29746db4b58ff19fe034196dcb04a040"
	I1101 08:32:05.326554   21584 cri.go:89] found id: "dd32f839b496afac7e54669ede10e44b695513bd1f08cb2572d080421d76ed1f"
	I1101 08:32:05.326557   21584 cri.go:89] found id: "b8dc66998b8c65737a3fc68f94611d5a75e4841817858e50cf8f41fe3d0b9111"
	I1101 08:32:05.326559   21584 cri.go:89] found id: "c4c4e8392feed85ce6d8b52f77463bc2a8238dd093e730bd11ad824f180a3227"
	I1101 08:32:05.326562   21584 cri.go:89] found id: "a4c41d6f050f2ca6af53a5d7a6a54f2b04fb24731eca6d7272b14503b747f50d"
	I1101 08:32:05.326564   21584 cri.go:89] found id: "2b4413f8423a31353523e4d44f7675fac21836f4e3b491f3d3f19955b8251025"
	I1101 08:32:05.326567   21584 cri.go:89] found id: "73d495a359ef08303218d0bd2af8743a68b70af8ffdfadd49ac606f145b559b6"
	I1101 08:32:05.326575   21584 cri.go:89] found id: "18fc9837ab4ea8c07f85c79610c9eda88508e53a37801274e8022d17c69f1a98"
	I1101 08:32:05.326580   21584 cri.go:89] found id: "87757a0f68b4c84bb6b6a0830633d098f646e6d9b6aa521bccfaeb77b540635f"
	I1101 08:32:05.326583   21584 cri.go:89] found id: "f17c2b6b25fbc93674f08548b5309b9715e2cb6d7600ac5d6557505072b3fb3f"
	I1101 08:32:05.326586   21584 cri.go:89] found id: "c60507f296e9571f568638d8fac5f87c704242925cf7b4aa378db809c3e3176b"
	I1101 08:32:05.326588   21584 cri.go:89] found id: "4c1ad1a76dfd8c2d5f52abbb115ac02bc7871e82ffe8eed8b2bce0574ab65ce3"
	I1101 08:32:05.326590   21584 cri.go:89] found id: "808e84f4795d8f4c21c0316fc7e7437f547a0d653508d0419288257452ccf97b"
	I1101 08:32:05.326593   21584 cri.go:89] found id: "d4c72eaef44361ee96a4bd979c97bf60778beaa33b136b1407ada4c192a11240"
	I1101 08:32:05.326595   21584 cri.go:89] found id: "cdda903ada7547082c69c9584210b581ad9bfe62602052de013ddc3c59d043bc"
	I1101 08:32:05.326597   21584 cri.go:89] found id: "b29235edc538375f7d6a03229d64488db46c49997437218731a8cd64edc28444"
	I1101 08:32:05.326600   21584 cri.go:89] found id: ""
	I1101 08:32:05.326645   21584 ssh_runner.go:195] Run: sudo runc list -f json
	I1101 08:32:05.341998   21584 out.go:203] 
	W1101 08:32:05.343249   21584 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T08:32:05Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T08:32:05Z" level=error msg="open /run/runc: no such file or directory"
	
	W1101 08:32:05.343270   21584 out.go:285] * 
	* 
	W1101 08:32:05.346227   21584 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_d91df5e23a6c7812cf3b3b0d72c142ff742a541e_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_d91df5e23a6c7812cf3b3b0d72c142ff742a541e_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1101 08:32:05.347550   21584 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable amd-gpu-device-plugin addon: args "out/minikube-linux-amd64 -p addons-491859 addons disable amd-gpu-device-plugin --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/AmdGpuDevicePlugin (6.26s)
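
Every addons disable failure in this run fails the same way: before disabling an addon, minikube checks whether the cluster is paused by listing kube-system containers through crictl and then calling "sudo runc list -f json", and that second call errors out because /run/runc does not exist on the node, so the command aborts with MK_ADDON_DISABLE_PAUSED even though the containers themselves are running. A minimal way to confirm this on the node, assuming the addons-491859 profile is still up (checking /run/crun alongside /run/runc is an assumption based on CRI-O commonly defaulting to crun, not something this log states):

    minikube -p addons-491859 ssh -- ls /run/runc /run/crun
    minikube -p addons-491859 ssh -- sudo runc list -f json
    minikube -p addons-491859 ssh -- sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system

If crictl lists the containers while runc has no state directory, the pause check is tripping over the runtime probe rather than over the cluster state.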

TestFunctional/parallel/ServiceCmdConnect (603.12s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-290156 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1640: (dbg) Run:  kubectl --context functional-290156 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:352: "hello-node-connect-7d85dfc575-pg9sh" [b67a24cf-cf5a-46f6-b27a-3a8a130af70e] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
I1101 08:38:14.109450    9414 retry.go:31] will retry after 3.066531066s: Temporary Error: Get "http://10.105.176.244": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
helpers_test.go:337: TestFunctional/parallel/ServiceCmdConnect: WARNING: pod list for "default" "app=hello-node-connect" returned: client rate limiter Wait returned an error: context deadline exceeded
functional_test.go:1645: ***** TestFunctional/parallel/ServiceCmdConnect: pod "app=hello-node-connect" failed to start within 10m0s: context deadline exceeded ****
functional_test.go:1645: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-290156 -n functional-290156
functional_test.go:1645: TestFunctional/parallel/ServiceCmdConnect: showing logs for failed pods as of 2025-11-01 08:47:47.09450417 +0000 UTC m=+1119.885496243
functional_test.go:1645: (dbg) Run:  kubectl --context functional-290156 describe po hello-node-connect-7d85dfc575-pg9sh -n default
functional_test.go:1645: (dbg) kubectl --context functional-290156 describe po hello-node-connect-7d85dfc575-pg9sh -n default:
Name:             hello-node-connect-7d85dfc575-pg9sh
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-290156/192.168.49.2
Start Time:       Sat, 01 Nov 2025 08:38:11 +0000
Labels:           app=hello-node-connect
pod-template-hash=7d85dfc575
Annotations:      <none>
Status:           Pending
IP:               10.244.0.4
IPs:
IP:           10.244.0.4
Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
Containers:
echo-server:
Container ID:   
Image:          kicbase/echo-server
Image ID:       
Port:           <none>
Host Port:      <none>
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:    <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-kg9fb (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
kube-api-access-kg9fb:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                     From               Message
----     ------     ----                    ----               -------
Normal   Scheduled  9m35s                   default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-pg9sh to functional-290156
Normal   Pulling    6m40s (x5 over 9m36s)   kubelet            Pulling image "kicbase/echo-server"
Warning  Failed     6m40s (x5 over 9m36s)   kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
Warning  Failed     6m40s (x5 over 9m36s)   kubelet            Error: ErrImagePull
Normal   BackOff    4m28s (x22 over 9m35s)  kubelet            Back-off pulling image "kicbase/echo-server"
Warning  Failed     4m28s (x22 over 9m35s)  kubelet            Error: ImagePullBackOff
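
The kubelet events above point at CRI-O's short-name resolution rather than at the registry: with short-name-mode set to enforcing in the containers registries configuration, an unqualified reference like kicbase/echo-server:latest that matches more than one configured search registry is rejected as ambiguous instead of being pulled. A sketch of the same deployment created with a fully qualified reference (the docker.io prefix and the 1.0 tag are assumptions about where the image is hosted, not something this log confirms):

    kubectl --context functional-290156 create deployment hello-node-connect --image=docker.io/kicbase/echo-server:1.0
    kubectl --context functional-290156 expose deployment hello-node-connect --type=NodePort --port=8080

The alternative would be relaxing short-name-mode (for example to permissive) in /etc/containers/registries.conf on the node, at the cost of reintroducing the ambiguity that enforcing mode is meant to catch.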
functional_test.go:1645: (dbg) Run:  kubectl --context functional-290156 logs hello-node-connect-7d85dfc575-pg9sh -n default
functional_test.go:1645: (dbg) Non-zero exit: kubectl --context functional-290156 logs hello-node-connect-7d85dfc575-pg9sh -n default: exit status 1 (66.084108ms)

** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-connect-7d85dfc575-pg9sh" is waiting to start: trying and failing to pull image

** /stderr **
functional_test.go:1645: kubectl --context functional-290156 logs hello-node-connect-7d85dfc575-pg9sh -n default: exit status 1
functional_test.go:1646: failed waiting for hello-node pod: app=hello-node-connect within 10m0s: context deadline exceeded
functional_test.go:1608: service test failed - dumping debug information
functional_test.go:1609: -----------------------service failure post-mortem--------------------------------
functional_test.go:1612: (dbg) Run:  kubectl --context functional-290156 describe po hello-node-connect
functional_test.go:1616: hello-node pod describe:
Name:             hello-node-connect-7d85dfc575-pg9sh
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-290156/192.168.49.2
Start Time:       Sat, 01 Nov 2025 08:38:11 +0000
Labels:           app=hello-node-connect
pod-template-hash=7d85dfc575
Annotations:      <none>
Status:           Pending
IP:               10.244.0.4
IPs:
IP:           10.244.0.4
Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
Containers:
echo-server:
Container ID:   
Image:          kicbase/echo-server
Image ID:       
Port:           <none>
Host Port:      <none>
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:    <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-kg9fb (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
kube-api-access-kg9fb:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                     From               Message
----     ------     ----                    ----               -------
Normal   Scheduled  9m35s                   default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-pg9sh to functional-290156
Normal   Pulling    6m40s (x5 over 9m36s)   kubelet            Pulling image "kicbase/echo-server"
Warning  Failed     6m40s (x5 over 9m36s)   kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
Warning  Failed     6m40s (x5 over 9m36s)   kubelet            Error: ErrImagePull
Normal   BackOff    4m28s (x22 over 9m35s)  kubelet            Back-off pulling image "kicbase/echo-server"
Warning  Failed     4m28s (x22 over 9m35s)  kubelet            Error: ImagePullBackOff

functional_test.go:1618: (dbg) Run:  kubectl --context functional-290156 logs -l app=hello-node-connect
functional_test.go:1618: (dbg) Non-zero exit: kubectl --context functional-290156 logs -l app=hello-node-connect: exit status 1 (68.215054ms)

** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-connect-7d85dfc575-pg9sh" is waiting to start: trying and failing to pull image

** /stderr **
functional_test.go:1620: "kubectl --context functional-290156 logs -l app=hello-node-connect" failed: exit status 1
functional_test.go:1622: hello-node logs:
functional_test.go:1624: (dbg) Run:  kubectl --context functional-290156 describe svc hello-node-connect
functional_test.go:1628: hello-node svc describe:
Name:                     hello-node-connect
Namespace:                default
Labels:                   app=hello-node-connect
Annotations:              <none>
Selector:                 app=hello-node-connect
Type:                     NodePort
IP Family Policy:         SingleStack
IP Families:              IPv4
IP:                       10.103.26.5
IPs:                      10.103.26.5
Port:                     <unset>  8080/TCP
TargetPort:               8080/TCP
NodePort:                 <unset>  32013/TCP
Endpoints:                
Session Affinity:         None
External Traffic Policy:  Cluster
Internal Traffic Policy:  Cluster
Events:                   <none>
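
The empty Endpoints field above follows directly from the pod never becoming Ready: with no ready endpoints behind the NodePort, connection attempts to the service cannot succeed regardless of node networking. Assuming the cluster is still running, the two can be checked side by side with:

    kubectl --context functional-290156 get endpoints hello-node-connect
    kubectl --context functional-290156 get pods -l app=hello-node-connect -o wide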
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-290156
helpers_test.go:243: (dbg) docker inspect functional-290156:

-- stdout --
	[
	    {
	        "Id": "dbf898492be9075a5565a27694c417b9abb62ef5b1611894a4de3bffb3b9f18d",
	        "Created": "2025-11-01T08:35:48.088961378Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 33657,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-01T08:35:48.122098019Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:a1caeebaf98ed0136731e905a1e086f77985a42c2ebb5a7e0b3d0bd7fcbe10cc",
	        "ResolvConfPath": "/var/lib/docker/containers/dbf898492be9075a5565a27694c417b9abb62ef5b1611894a4de3bffb3b9f18d/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/dbf898492be9075a5565a27694c417b9abb62ef5b1611894a4de3bffb3b9f18d/hostname",
	        "HostsPath": "/var/lib/docker/containers/dbf898492be9075a5565a27694c417b9abb62ef5b1611894a4de3bffb3b9f18d/hosts",
	        "LogPath": "/var/lib/docker/containers/dbf898492be9075a5565a27694c417b9abb62ef5b1611894a4de3bffb3b9f18d/dbf898492be9075a5565a27694c417b9abb62ef5b1611894a4de3bffb3b9f18d-json.log",
	        "Name": "/functional-290156",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-290156:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "functional-290156",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "dbf898492be9075a5565a27694c417b9abb62ef5b1611894a4de3bffb3b9f18d",
	                "LowerDir": "/var/lib/docker/overlay2/e17ab8789f2671ac72f4980e2b64da127ba3b067f813f7586349a52158f7528a-init/diff:/var/lib/docker/overlay2/95b4043403bc6c2b0722fad0bb31817195e1e004282cc78b772e7c8c9f9def2d/diff",
	                "MergedDir": "/var/lib/docker/overlay2/e17ab8789f2671ac72f4980e2b64da127ba3b067f813f7586349a52158f7528a/merged",
	                "UpperDir": "/var/lib/docker/overlay2/e17ab8789f2671ac72f4980e2b64da127ba3b067f813f7586349a52158f7528a/diff",
	                "WorkDir": "/var/lib/docker/overlay2/e17ab8789f2671ac72f4980e2b64da127ba3b067f813f7586349a52158f7528a/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-290156",
	                "Source": "/var/lib/docker/volumes/functional-290156/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-290156",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-290156",
	                "name.minikube.sigs.k8s.io": "functional-290156",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "bea3347dfe6cca682976c09014ec9b8902efb77a3d8c4a3576909f6082416f79",
	            "SandboxKey": "/var/run/docker/netns/bea3347dfe6c",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32778"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32779"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32782"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32780"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32781"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-290156": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "0a:94:8e:4a:1a:1e",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "9c48d2c29dea50971403a16a3ee6bab9080402e34dc1798aeb2a819db1dd30ee",
	                    "EndpointID": "4a4bce1e330ffb2e4a96e760ea56dcc2327a9a490b934b309899c84329e27c6d",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-290156",
	                        "dbf898492be9"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
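
The inspect output confirms the functional-290156 node container is running with its control ports published on 127.0.0.1, including the API server port 8441/tcp mapped to 32781. The same Go-template query the harness uses for the SSH port earlier in this log can be pointed at any of those mappings; for example, to recover the published API server port (a sketch, assuming the container is still up):

    docker container inspect -f '{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}' functional-290156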
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-290156 -n functional-290156
helpers_test.go:252: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p functional-290156 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p functional-290156 logs -n 25: (1.420261471s)
helpers_test.go:260: TestFunctional/parallel/ServiceCmdConnect logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                   ARGS                                                    │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start          │ -p functional-290156 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio           │ functional-290156 │ jenkins │ v1.37.0 │ 01 Nov 25 08:38 UTC │                     │
	│ start          │ -p functional-290156 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio │ functional-290156 │ jenkins │ v1.37.0 │ 01 Nov 25 08:38 UTC │                     │
	│ dashboard      │ --url --port 36195 -p functional-290156 --alsologtostderr -v=1                                            │ functional-290156 │ jenkins │ v1.37.0 │ 01 Nov 25 08:38 UTC │ 01 Nov 25 08:38 UTC │
	│ ssh            │ functional-290156 ssh sudo cat /etc/ssl/certs/9414.pem                                                    │ functional-290156 │ jenkins │ v1.37.0 │ 01 Nov 25 08:38 UTC │ 01 Nov 25 08:38 UTC │
	│ ssh            │ functional-290156 ssh sudo cat /etc/test/nested/copy/9414/hosts                                           │ functional-290156 │ jenkins │ v1.37.0 │ 01 Nov 25 08:38 UTC │ 01 Nov 25 08:38 UTC │
	│ ssh            │ functional-290156 ssh sudo cat /usr/share/ca-certificates/9414.pem                                        │ functional-290156 │ jenkins │ v1.37.0 │ 01 Nov 25 08:38 UTC │ 01 Nov 25 08:38 UTC │
	│ ssh            │ functional-290156 ssh sudo cat /etc/ssl/certs/51391683.0                                                  │ functional-290156 │ jenkins │ v1.37.0 │ 01 Nov 25 08:38 UTC │ 01 Nov 25 08:38 UTC │
	│ ssh            │ functional-290156 ssh sudo cat /etc/ssl/certs/94142.pem                                                   │ functional-290156 │ jenkins │ v1.37.0 │ 01 Nov 25 08:38 UTC │ 01 Nov 25 08:38 UTC │
	│ ssh            │ functional-290156 ssh sudo cat /usr/share/ca-certificates/94142.pem                                       │ functional-290156 │ jenkins │ v1.37.0 │ 01 Nov 25 08:38 UTC │ 01 Nov 25 08:38 UTC │
	│ ssh            │ functional-290156 ssh sudo cat /etc/ssl/certs/3ec20f2e.0                                                  │ functional-290156 │ jenkins │ v1.37.0 │ 01 Nov 25 08:38 UTC │ 01 Nov 25 08:38 UTC │
	│ image          │ functional-290156 image ls --format short --alsologtostderr                                               │ functional-290156 │ jenkins │ v1.37.0 │ 01 Nov 25 08:38 UTC │ 01 Nov 25 08:38 UTC │
	│ image          │ functional-290156 image ls --format json --alsologtostderr                                                │ functional-290156 │ jenkins │ v1.37.0 │ 01 Nov 25 08:38 UTC │ 01 Nov 25 08:38 UTC │
	│ image          │ functional-290156 image ls --format table --alsologtostderr                                               │ functional-290156 │ jenkins │ v1.37.0 │ 01 Nov 25 08:38 UTC │ 01 Nov 25 08:38 UTC │
	│ image          │ functional-290156 image ls --format yaml --alsologtostderr                                                │ functional-290156 │ jenkins │ v1.37.0 │ 01 Nov 25 08:38 UTC │ 01 Nov 25 08:38 UTC │
	│ ssh            │ functional-290156 ssh pgrep buildkitd                                                                     │ functional-290156 │ jenkins │ v1.37.0 │ 01 Nov 25 08:38 UTC │                     │
	│ image          │ functional-290156 image build -t localhost/my-image:functional-290156 testdata/build --alsologtostderr    │ functional-290156 │ jenkins │ v1.37.0 │ 01 Nov 25 08:38 UTC │ 01 Nov 25 08:38 UTC │
	│ image          │ functional-290156 image ls                                                                                │ functional-290156 │ jenkins │ v1.37.0 │ 01 Nov 25 08:38 UTC │ 01 Nov 25 08:38 UTC │
	│ update-context │ functional-290156 update-context --alsologtostderr -v=2                                                   │ functional-290156 │ jenkins │ v1.37.0 │ 01 Nov 25 08:38 UTC │ 01 Nov 25 08:38 UTC │
	│ update-context │ functional-290156 update-context --alsologtostderr -v=2                                                   │ functional-290156 │ jenkins │ v1.37.0 │ 01 Nov 25 08:38 UTC │ 01 Nov 25 08:38 UTC │
	│ update-context │ functional-290156 update-context --alsologtostderr -v=2                                                   │ functional-290156 │ jenkins │ v1.37.0 │ 01 Nov 25 08:38 UTC │ 01 Nov 25 08:38 UTC │
	│ service        │ functional-290156 service list                                                                            │ functional-290156 │ jenkins │ v1.37.0 │ 01 Nov 25 08:47 UTC │ 01 Nov 25 08:47 UTC │
	│ service        │ functional-290156 service list -o json                                                                    │ functional-290156 │ jenkins │ v1.37.0 │ 01 Nov 25 08:47 UTC │ 01 Nov 25 08:47 UTC │
	│ service        │ functional-290156 service --namespace=default --https --url hello-node                                    │ functional-290156 │ jenkins │ v1.37.0 │ 01 Nov 25 08:47 UTC │                     │
	│ service        │ functional-290156 service hello-node --url --format={{.IP}}                                               │ functional-290156 │ jenkins │ v1.37.0 │ 01 Nov 25 08:47 UTC │                     │
	│ service        │ functional-290156 service hello-node --url                                                                │ functional-290156 │ jenkins │ v1.37.0 │ 01 Nov 25 08:47 UTC │                     │
	└────────────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/01 08:38:30
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1101 08:38:30.196589   47997 out.go:360] Setting OutFile to fd 1 ...
	I1101 08:38:30.196708   47997 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 08:38:30.196715   47997 out.go:374] Setting ErrFile to fd 2...
	I1101 08:38:30.196721   47997 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 08:38:30.197081   47997 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21835-5913/.minikube/bin
	I1101 08:38:30.197570   47997 out.go:368] Setting JSON to false
	I1101 08:38:30.198589   47997 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":1258,"bootTime":1761985052,"procs":240,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1101 08:38:30.198696   47997 start.go:143] virtualization: kvm guest
	I1101 08:38:30.200833   47997 out.go:179] * [functional-290156] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	I1101 08:38:30.202265   47997 out.go:179]   - MINIKUBE_LOCATION=21835
	I1101 08:38:30.202292   47997 notify.go:221] Checking for updates...
	I1101 08:38:30.205355   47997 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1101 08:38:30.206687   47997 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21835-5913/kubeconfig
	I1101 08:38:30.207882   47997 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21835-5913/.minikube
	I1101 08:38:30.209331   47997 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1101 08:38:30.210964   47997 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1101 08:38:30.212843   47997 config.go:182] Loaded profile config "functional-290156": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 08:38:30.213321   47997 driver.go:422] Setting default libvirt URI to qemu:///system
	I1101 08:38:30.237186   47997 docker.go:124] docker version: linux-28.5.1:Docker Engine - Community
	I1101 08:38:30.237295   47997 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 08:38:30.296309   47997 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:56 SystemTime:2025-11-01 08:38:30.285327225 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1101 08:38:30.296430   47997 docker.go:319] overlay module found
	I1101 08:38:30.298205   47997 out.go:179] * Utilisation du pilote docker basé sur le profil existant
	I1101 08:38:30.299643   47997 start.go:309] selected driver: docker
	I1101 08:38:30.299663   47997 start.go:930] validating driver "docker" against &{Name:functional-290156 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-290156 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Moun
tPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 08:38:30.299770   47997 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1101 08:38:30.301790   47997 out.go:203] 
	W1101 08:38:30.303043   47997 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1101 08:38:30.304396   47997 out.go:203] 
	
	
	==> CRI-O <==
	Nov 01 08:38:37 functional-290156 crio[3558]: time="2025-11-01T08:38:37.599048471Z" level=info msg="Pulling image: docker.io/mysql:5.7" id=29c2c9bf-cd6e-4512-8f61-065a04f04e22 name=/runtime.v1.ImageService/PullImage
	Nov 01 08:38:37 functional-290156 crio[3558]: time="2025-11-01T08:38:37.600635073Z" level=info msg="Trying to access \"docker.io/library/mysql:5.7\""
	Nov 01 08:38:43 functional-290156 crio[3558]: time="2025-11-01T08:38:43.801425773Z" level=info msg="Pulled image: docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da" id=29c2c9bf-cd6e-4512-8f61-065a04f04e22 name=/runtime.v1.ImageService/PullImage
	Nov 01 08:38:43 functional-290156 crio[3558]: time="2025-11-01T08:38:43.80214043Z" level=info msg="Checking image status: docker.io/mysql:5.7" id=d6338154-23ba-4f42-bc62-0ae5aaad8aed name=/runtime.v1.ImageService/ImageStatus
	Nov 01 08:38:43 functional-290156 crio[3558]: time="2025-11-01T08:38:43.804319562Z" level=info msg="Checking image status: docker.io/mysql:5.7" id=b4252659-6d1e-4925-961c-c9579fa09bf7 name=/runtime.v1.ImageService/ImageStatus
	Nov 01 08:38:43 functional-290156 crio[3558]: time="2025-11-01T08:38:43.809015471Z" level=info msg="Creating container: default/mysql-5bb876957f-twr9j/mysql" id=385caeec-de26-4a0f-8414-44621154b618 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 08:38:43 functional-290156 crio[3558]: time="2025-11-01T08:38:43.809154064Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 08:38:43 functional-290156 crio[3558]: time="2025-11-01T08:38:43.815001771Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 08:38:43 functional-290156 crio[3558]: time="2025-11-01T08:38:43.815612422Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 08:38:43 functional-290156 crio[3558]: time="2025-11-01T08:38:43.853469594Z" level=info msg="Created container d8075548aeebaa184c47f1384bca0344fd7f6e1d4a8d750b084765a20d3f4132: default/mysql-5bb876957f-twr9j/mysql" id=385caeec-de26-4a0f-8414-44621154b618 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 08:38:43 functional-290156 crio[3558]: time="2025-11-01T08:38:43.854213612Z" level=info msg="Starting container: d8075548aeebaa184c47f1384bca0344fd7f6e1d4a8d750b084765a20d3f4132" id=4762a102-3ae2-4bd7-ba5d-0aa1d53df561 name=/runtime.v1.RuntimeService/StartContainer
	Nov 01 08:38:43 functional-290156 crio[3558]: time="2025-11-01T08:38:43.856017212Z" level=info msg="Started container" PID=7450 containerID=d8075548aeebaa184c47f1384bca0344fd7f6e1d4a8d750b084765a20d3f4132 description=default/mysql-5bb876957f-twr9j/mysql id=4762a102-3ae2-4bd7-ba5d-0aa1d53df561 name=/runtime.v1.RuntimeService/StartContainer sandboxID=a4e9aa6f42a74327e244e0d79208e854f3180b5b14a3ba09d8f33381e37a15da
	Nov 01 08:38:49 functional-290156 crio[3558]: time="2025-11-01T08:38:49.962160931Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=30a37540-db72-4e7c-bf0b-688a1cdfd8b5 name=/runtime.v1.ImageService/PullImage
	Nov 01 08:38:51 functional-290156 crio[3558]: time="2025-11-01T08:38:51.962797649Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=3f0d1a58-45fa-4856-bfab-f5fc653098f0 name=/runtime.v1.ImageService/PullImage
	Nov 01 08:38:56 functional-290156 crio[3558]: time="2025-11-01T08:38:56.95353582Z" level=info msg="Stopping pod sandbox: a4756d21b9df4fce41ee4ae92192406d4ebe3d650291367c9e2934656a8b04f1" id=94425cc4-4123-482f-9b89-2d590e0906fe name=/runtime.v1.RuntimeService/StopPodSandbox
	Nov 01 08:38:56 functional-290156 crio[3558]: time="2025-11-01T08:38:56.953588863Z" level=info msg="Stopped pod sandbox (already stopped): a4756d21b9df4fce41ee4ae92192406d4ebe3d650291367c9e2934656a8b04f1" id=94425cc4-4123-482f-9b89-2d590e0906fe name=/runtime.v1.RuntimeService/StopPodSandbox
	Nov 01 08:38:56 functional-290156 crio[3558]: time="2025-11-01T08:38:56.953962462Z" level=info msg="Removing pod sandbox: a4756d21b9df4fce41ee4ae92192406d4ebe3d650291367c9e2934656a8b04f1" id=43cbb17b-0649-41d1-8d28-3aff6843aec6 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Nov 01 08:38:56 functional-290156 crio[3558]: time="2025-11-01T08:38:56.957022539Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Nov 01 08:38:56 functional-290156 crio[3558]: time="2025-11-01T08:38:56.957080085Z" level=info msg="Removed pod sandbox: a4756d21b9df4fce41ee4ae92192406d4ebe3d650291367c9e2934656a8b04f1" id=43cbb17b-0649-41d1-8d28-3aff6843aec6 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Nov 01 08:39:33 functional-290156 crio[3558]: time="2025-11-01T08:39:33.96185795Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=23565a34-0e62-4dad-ab20-d80cfe17fad1 name=/runtime.v1.ImageService/PullImage
	Nov 01 08:39:39 functional-290156 crio[3558]: time="2025-11-01T08:39:39.962368662Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=5ec69c42-1e01-40f9-9d5c-1943266f4e71 name=/runtime.v1.ImageService/PullImage
	Nov 01 08:40:55 functional-290156 crio[3558]: time="2025-11-01T08:40:55.962472371Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=5248d402-e24e-40a2-b67b-af32fecd4d87 name=/runtime.v1.ImageService/PullImage
	Nov 01 08:41:07 functional-290156 crio[3558]: time="2025-11-01T08:41:07.962759789Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=f7463c66-80da-4008-9e94-d57a0c42ae51 name=/runtime.v1.ImageService/PullImage
	Nov 01 08:43:36 functional-290156 crio[3558]: time="2025-11-01T08:43:36.96238134Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=b5724d04-47d6-4d08-9348-719eb3c0d566 name=/runtime.v1.ImageService/PullImage
	Nov 01 08:43:58 functional-290156 crio[3558]: time="2025-11-01T08:43:58.962463014Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=7ff604ad-86fd-48e2-ab6c-109d8d96eb51 name=/runtime.v1.ImageService/PullImage
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                            CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	d8075548aeeba       docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da                  9 minutes ago       Running             mysql                       0                   a4e9aa6f42a74       mysql-5bb876957f-twr9j                       default
	9a426d430e5ed       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029         9 minutes ago       Running             kubernetes-dashboard        0                   ddb4a7d92ad54       kubernetes-dashboard-855c9754f9-brbjp        kubernetes-dashboard
	e157cb650d8ed       docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a   9 minutes ago       Running             dashboard-metrics-scraper   0                   2612738a6d887       dashboard-metrics-scraper-77bf4d6c4c-tzqq6   kubernetes-dashboard
	e04c82dbc419c       docker.io/library/nginx@sha256:12549785f32b3daca6f1c39e7d756226eeb0e8bb20b9e2d8a03d484160862b58                  9 minutes ago       Running             myfrontend                  0                   235162bb09a36       sp-pod                                       default
	dc8b45732638a       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998              9 minutes ago       Exited              mount-munger                0                   7bdb674e93e77       busybox-mount                                default
	3bc31239247d2       docker.io/library/nginx@sha256:667473807103639a0aca5b49534a216d2b64f0fb868aaa801f023da0cdd781c7                  10 minutes ago      Running             nginx                       0                   6bf8b20a378c1       nginx-svc                                    default
	744c34ad6da88       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                                 10 minutes ago      Running             kube-apiserver              2                   b0ec0ed81402e       kube-apiserver-functional-290156             kube-system
	0e77098d21151       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                 10 minutes ago      Running             storage-provisioner         2                   d4eb65dc297ae       storage-provisioner                          kube-system
	8d3e4a4df7b75       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                                 10 minutes ago      Exited              kube-apiserver              1                   b0ec0ed81402e       kube-apiserver-functional-290156             kube-system
	fb164bd7d8648       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                                 10 minutes ago      Running             kube-scheduler              1                   99a27258d1f60       kube-scheduler-functional-290156             kube-system
	d5eba7bc37789       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                                 10 minutes ago      Running             etcd                        1                   c9ed674faffa3       etcd-functional-290156                       kube-system
	39178496fc971       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                                 10 minutes ago      Running             kube-controller-manager     1                   f87faf9578084       kube-controller-manager-functional-290156    kube-system
	e4782e228e928       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                 11 minutes ago      Exited              storage-provisioner         1                   d4eb65dc297ae       storage-provisioner                          kube-system
	480bf9d794c12       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                                 11 minutes ago      Running             kindnet-cni                 1                   9a4762c7f0b0d       kindnet-zzqlm                                kube-system
	8bd1c8e9c88db       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                                 11 minutes ago      Running             kube-proxy                  1                   8c20bf92b02ce       kube-proxy-bmj46                             kube-system
	3345aa1a96796       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                                 11 minutes ago      Running             coredns                     1                   9661b7d852fba       coredns-66bc5c9577-qj8bl                     kube-system
	8c1ee1f086479       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                                 11 minutes ago      Exited              coredns                     0                   9661b7d852fba       coredns-66bc5c9577-qj8bl                     kube-system
	b2f1927668051       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                                 11 minutes ago      Exited              kube-proxy                  0                   8c20bf92b02ce       kube-proxy-bmj46                             kube-system
	628a0aa516c5a       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                                 11 minutes ago      Exited              kindnet-cni                 0                   9a4762c7f0b0d       kindnet-zzqlm                                kube-system
	43cade1b0a616       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                                 11 minutes ago      Exited              kube-controller-manager     0                   f87faf9578084       kube-controller-manager-functional-290156    kube-system
	ba0f0b4183331       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                                 11 minutes ago      Exited              etcd                        0                   c9ed674faffa3       etcd-functional-290156                       kube-system
	02dcdce13fd0e       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                                 11 minutes ago      Exited              kube-scheduler              0                   99a27258d1f60       kube-scheduler-functional-290156             kube-system
	
	
	==> coredns [3345aa1a967964f872ba1b68b10ff900ebaf4ee55a35b94dc8c399678d89e29c] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:33115 - 46038 "HINFO IN 3484300175021592708.3408854828446883403. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.104481115s
	[ERROR] plugin/kubernetes: Unhandled Error
	[ERROR] plugin/kubernetes: Unhandled Error
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> coredns [8c1ee1f0864792f75a1a506fce3ed4118400bb6bb12ce83258289a5d1de5cb0e] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:34414 - 27508 "HINFO IN 4501663624202795176.7205082785476589309. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.151300883s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               functional-290156
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-290156
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=21e20c7776311c6e29254646bf2620ea610dd192
	                    minikube.k8s.io/name=functional-290156
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_01T08_36_01_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 01 Nov 2025 08:35:58 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-290156
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 01 Nov 2025 08:47:42 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 01 Nov 2025 08:45:06 +0000   Sat, 01 Nov 2025 08:35:56 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 01 Nov 2025 08:45:06 +0000   Sat, 01 Nov 2025 08:35:56 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 01 Nov 2025 08:45:06 +0000   Sat, 01 Nov 2025 08:35:56 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 01 Nov 2025 08:45:06 +0000   Sat, 01 Nov 2025 08:36:17 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    functional-290156
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863348Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863348Ki
	  pods:               110
	System Info:
	  Machine ID:                 98aac72b9abe9f06f1b9b38568f5cc96
	  System UUID:                954aea26-0254-43eb-b9cd-48345573421f
	  Boot ID:                    87f40279-f2b1-40f3-beea-00aea5942dd1
	  Kernel Version:             6.8.0-1043-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (15 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-75c85bcc94-jbb2v                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m37s
	  default                     hello-node-connect-7d85dfc575-pg9sh           0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m37s
	  default                     mysql-5bb876957f-twr9j                        600m (7%)     700m (8%)   512Mi (1%)       700Mi (2%)     9m11s
	  default                     nginx-svc                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     sp-pod                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m20s
	  kube-system                 coredns-66bc5c9577-qj8bl                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     11m
	  kube-system                 etcd-functional-290156                        100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         11m
	  kube-system                 kindnet-zzqlm                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      11m
	  kube-system                 kube-apiserver-functional-290156              250m (3%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-functional-290156     200m (2%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-proxy-bmj46                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-scheduler-functional-290156              100m (1%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kubernetes-dashboard        dashboard-metrics-scraper-77bf4d6c4c-tzqq6    0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m17s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-brbjp         0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m17s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1450m (18%)  800m (10%)
	  memory             732Mi (2%)   920Mi (2%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 11m    kube-proxy       
	  Normal  Starting                 9m44s  kube-proxy       
	  Normal  Starting                 11m    kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  11m    kubelet          Node functional-290156 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    11m    kubelet          Node functional-290156 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     11m    kubelet          Node functional-290156 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           11m    node-controller  Node functional-290156 event: Registered Node functional-290156 in Controller
	  Normal  NodeReady                11m    kubelet          Node functional-290156 status is now: NodeReady
	  Normal  Starting                 10m    kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  10m    kubelet          Node functional-290156 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    10m    kubelet          Node functional-290156 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     10m    kubelet          Node functional-290156 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           10m    node-controller  Node functional-290156 event: Registered Node functional-290156 in Controller
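	The node description above is a point-in-time capture made during log collection. As an illustrative way to re-check the same fields against this profile (assuming kubectl is configured for the functional-290156 cluster; this command is not part of the recorded test run):
	
	kubectl describe node functional-290156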
	
	
	==> dmesg <==
	[  +0.000025] ll header: 00000000: 0a 94 8e 4a 1a 1e 72 e7 eb 50 9b c7 08 00
	[  +2.047726] IPv4: martian source 10.105.176.244 from 192.168.49.2, on dev eth0
	[  +0.000008] ll header: 00000000: 0a 94 8e 4a 1a 1e 72 e7 eb 50 9b c7 08 00
	[Nov 1 08:38] IPv4: martian source 10.105.176.244 from 192.168.49.2, on dev eth0
	[  +0.000025] ll header: 00000000: 0a 94 8e 4a 1a 1e 72 e7 eb 50 9b c7 08 00
	[  +2.215674] IPv4: martian source 10.105.176.244 from 192.168.49.2, on dev eth0
	[  +0.000007] ll header: 00000000: 0a 94 8e 4a 1a 1e 72 e7 eb 50 9b c7 08 00
	[  +1.047939] IPv4: martian source 10.244.0.3 from 192.168.49.2, on dev eth0
	[  +0.000023] ll header: 00000000: 0a 94 8e 4a 1a 1e 72 e7 eb 50 9b c7 08 00
	[  +1.023915] IPv4: martian source 10.244.0.3 from 192.168.49.2, on dev eth0
	[  +0.000007] ll header: 00000000: 0a 94 8e 4a 1a 1e 72 e7 eb 50 9b c7 08 00
	[  +1.023851] IPv4: martian source 10.244.0.3 from 192.168.49.2, on dev eth0
	[  +0.000021] ll header: 00000000: 0a 94 8e 4a 1a 1e 72 e7 eb 50 9b c7 08 00
	[  +1.023867] IPv4: martian source 10.244.0.3 from 192.168.49.2, on dev eth0
	[  +0.000026] ll header: 00000000: 0a 94 8e 4a 1a 1e 72 e7 eb 50 9b c7 08 00
	[  +1.023880] IPv4: martian source 10.244.0.3 from 192.168.49.2, on dev eth0
	[  +0.000033] ll header: 00000000: 0a 94 8e 4a 1a 1e 72 e7 eb 50 9b c7 08 00
	[  +1.023884] IPv4: martian source 10.244.0.3 from 192.168.49.2, on dev eth0
	[  +0.000029] ll header: 00000000: 0a 94 8e 4a 1a 1e 72 e7 eb 50 9b c7 08 00
	[  +1.023851] IPv4: martian source 10.244.0.3 from 192.168.49.2, on dev eth0
	[  +0.000028] ll header: 00000000: 0a 94 8e 4a 1a 1e 72 e7 eb 50 9b c7 08 00
	[  +4.031524] IPv4: martian source 10.244.0.3 from 192.168.49.2, on dev eth0
	[  +0.000026] ll header: 00000000: 0a 94 8e 4a 1a 1e 72 e7 eb 50 9b c7 08 00
	[  +8.255123] IPv4: martian source 10.244.0.3 from 192.168.49.2, on dev eth0
	[  +0.000022] ll header: 00000000: 0a 94 8e 4a 1a 1e 72 e7 eb 50 9b c7 08 00
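	The repeated "martian source" entries above are the kernel flagging packets whose source address it does not expect on eth0; they show up in dmesg only while martian logging is enabled. An illustrative check of that setting on the node (assumption: run inside the node, e.g. via minikube ssh; not part of the recorded test run):
	
	sysctl net.ipv4.conf.all.log_martians net.ipv4.conf.eth0.log_martians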
	
	
	==> etcd [ba0f0b41833319135bb249ac35c0956431e27ee5be70ede0e050e667acae6a75] <==
	{"level":"warn","ts":"2025-11-01T08:35:58.130484Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34856","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T08:35:58.136430Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34870","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T08:35:58.142459Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34886","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T08:35:58.157004Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34902","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T08:35:58.162666Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34922","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T08:35:58.168782Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34938","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T08:35:58.219065Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34966","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-11-01T08:36:53.918132Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-11-01T08:36:53.918213Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"functional-290156","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	{"level":"error","ts":"2025-11-01T08:36:53.918304Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-11-01T08:36:53.919880Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-11-01T08:36:53.919915Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-11-01T08:36:53.919980Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-11-01T08:36:53.920007Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"info","ts":"2025-11-01T08:36:53.919987Z","caller":"etcdserver/server.go:1281","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"aec36adc501070cc","current-leader-member-id":"aec36adc501070cc"}
	{"level":"warn","ts":"2025-11-01T08:36:53.920020Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-11-01T08:36:53.920023Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"error","ts":"2025-11-01T08:36:53.920030Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"error","ts":"2025-11-01T08:36:53.920033Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-11-01T08:36:53.920049Z","caller":"etcdserver/server.go:2319","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"info","ts":"2025-11-01T08:36:53.920055Z","caller":"etcdserver/server.go:2342","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"info","ts":"2025-11-01T08:36:53.922110Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"error","ts":"2025-11-01T08:36:53.922171Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-11-01T08:36:53.922197Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2025-11-01T08:36:53.922217Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"functional-290156","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	
	
	==> etcd [d5eba7bc377897fc43f397f70c0d3847ad83b205bab6b0223faca106a48c5c4e] <==
	{"level":"warn","ts":"2025-11-01T08:37:16.715431Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47076","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T08:37:16.721896Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47098","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T08:37:16.727853Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47126","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T08:37:16.734083Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47152","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T08:37:16.740311Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47164","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T08:37:16.746459Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47176","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T08:37:16.753251Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47204","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T08:37:16.759775Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47206","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T08:37:16.765876Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47220","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T08:37:16.772299Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47240","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T08:37:16.778616Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47254","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T08:37:16.785477Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47274","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T08:37:16.792298Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47292","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T08:37:16.798611Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47306","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T08:37:16.804982Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47328","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T08:37:16.811149Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47342","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T08:37:16.817476Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47364","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T08:37:16.831493Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47390","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T08:37:16.837881Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47398","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T08:37:16.865819Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47422","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T08:37:16.872431Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47438","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T08:37:16.878853Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47444","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-11-01T08:47:16.453290Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1140}
	{"level":"info","ts":"2025-11-01T08:47:16.473879Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1140,"took":"20.222275ms","hash":3749779668,"current-db-size-bytes":3411968,"current-db-size":"3.4 MB","current-db-size-in-use-bytes":1531904,"current-db-size-in-use":"1.5 MB"}
	{"level":"info","ts":"2025-11-01T08:47:16.473933Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":3749779668,"revision":1140,"compact-revision":-1}
	
	
	==> kernel <==
	 08:47:48 up 30 min,  0 user,  load average: 0.10, 0.19, 0.25
	Linux functional-290156 6.8.0-1043-gcp #46~22.04.1-Ubuntu SMP Wed Oct 22 19:00:03 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [480bf9d794c12d484dedb86f326664954a368793e0067f2b722078f9be84e772] <==
	I1101 08:45:44.475399       1 main.go:301] handling current node
	I1101 08:45:54.475794       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1101 08:45:54.475855       1 main.go:301] handling current node
	I1101 08:46:04.476563       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1101 08:46:04.476612       1 main.go:301] handling current node
	I1101 08:46:14.479684       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1101 08:46:14.479719       1 main.go:301] handling current node
	I1101 08:46:24.475662       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1101 08:46:24.475696       1 main.go:301] handling current node
	I1101 08:46:34.483080       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1101 08:46:34.483118       1 main.go:301] handling current node
	I1101 08:46:44.475323       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1101 08:46:44.475356       1 main.go:301] handling current node
	I1101 08:46:54.480494       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1101 08:46:54.480527       1 main.go:301] handling current node
	I1101 08:47:04.483889       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1101 08:47:04.483929       1 main.go:301] handling current node
	I1101 08:47:14.481398       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1101 08:47:14.481442       1 main.go:301] handling current node
	I1101 08:47:24.476081       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1101 08:47:24.476125       1 main.go:301] handling current node
	I1101 08:47:34.483382       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1101 08:47:34.483417       1 main.go:301] handling current node
	I1101 08:47:44.476517       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1101 08:47:44.476555       1 main.go:301] handling current node
	
	
	==> kindnet [628a0aa516c5a61a4d59a63e087644ee562c66d9040792ea40706ce1294f0eef] <==
	I1101 08:36:06.911558       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1101 08:36:06.955029       1 main.go:139] hostIP = 192.168.49.2
	podIP = 192.168.49.2
	I1101 08:36:06.955251       1 main.go:148] setting mtu 1500 for CNI 
	I1101 08:36:06.955271       1 main.go:178] kindnetd IP family: "ipv4"
	I1101 08:36:06.955298       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-01T08:36:07Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1101 08:36:07.306522       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1101 08:36:07.306551       1 controller.go:381] "Waiting for informer caches to sync"
	I1101 08:36:07.306563       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1101 08:36:07.306830       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1101 08:36:07.606784       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1101 08:36:07.606816       1 metrics.go:72] Registering metrics
	I1101 08:36:07.606941       1 controller.go:711] "Syncing nftables rules"
	I1101 08:36:17.307945       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1101 08:36:17.308028       1 main.go:301] handling current node
	I1101 08:36:27.311153       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1101 08:36:27.311193       1 main.go:301] handling current node
	I1101 08:36:37.311044       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1101 08:36:37.311079       1 main.go:301] handling current node
	
	
	==> kube-apiserver [744c34ad6da88a94aec07d06f09d93497edcf0ddec3541d5e89bfc7f49e6b52e] <==
	I1101 08:37:17.407025       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1101 08:37:17.422082       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1101 08:37:17.826964       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1101 08:37:17.835432       1 controller.go:667] quota admission added evaluator for: endpoints
	I1101 08:37:17.835890       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1101 08:37:17.839927       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1101 08:37:18.286691       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W1101 08:37:18.498713       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.49.2]
	I1101 08:37:37.232026       1 alloc.go:328] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.105.77.83"}
	I1101 08:37:41.263188       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1101 08:37:41.349212       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.111.236.209"}
	I1101 08:37:42.634249       1 alloc.go:328] "allocated clusterIPs" service="default/nginx-svc" clusterIPs={"IPv4":"10.105.176.244"}
	I1101 08:37:46.744987       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.103.26.5"}
	E1101 08:38:27.918084       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:38106: use of closed network connection
	I1101 08:38:31.169659       1 controller.go:667] quota admission added evaluator for: namespaces
	I1101 08:38:31.220551       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1101 08:38:31.230921       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1101 08:38:31.285711       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.105.11.228"}
	I1101 08:38:31.297644       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.100.245.245"}
	E1101 08:38:36.787655       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:39496: use of closed network connection
	I1101 08:38:37.221525       1 alloc.go:328] "allocated clusterIPs" service="default/mysql" clusterIPs={"IPv4":"10.109.47.74"}
	E1101 08:38:50.351745       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:58552: use of closed network connection
	E1101 08:38:51.031943       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:58562: use of closed network connection
	E1101 08:38:52.976563       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:58574: use of closed network connection
	I1101 08:47:17.317438       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	
	
	==> kube-apiserver [8d3e4a4df7b757b94021df44811bfd9a3f8811b34603c7109823b78cc2f20bf1] <==
	I1101 08:36:58.104482       1 options.go:263] external host was not specified, using 192.168.49.2
	I1101 08:36:58.106794       1 server.go:150] Version: v1.34.1
	I1101 08:36:58.106827       1 server.go:152] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	E1101 08:36:58.107135       1 run.go:72] "command failed" err="failed to create listener: failed to listen on 0.0.0.0:8441: listen tcp 0.0.0.0:8441: bind: address already in use"
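	This kube-apiserver instance exited immediately because something was still bound to 0.0.0.0:8441 when it tried to listen; the other apiserver container (744c34ad6da88, logged above) goes on to serve on that port. An illustrative way to see which process holds the port on the node (assumes ss from iproute2 is available; this check is not part of the recorded test run):
	
	ss -ltnp | grep ':8441'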
	
	
	==> kube-controller-manager [39178496fc971311f6a0508e7843092dce683597dd82e16b862f226bee2a15c2] <==
	W1101 08:37:14.639337       1 client_builder_dynamic.go:197] get or create service account failed: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/serviceaccounts/replicaset-controller": dial tcp 192.168.49.2:8441: connect: connection refused
	W1101 08:37:14.674128       1 client_builder_dynamic.go:197] get or create service account failed: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/serviceaccounts/endpointslice-controller": dial tcp 192.168.49.2:8441: connect: connection refused
	E1101 08:37:17.318136       1 reflector.go:205] "Failed to watch" err="roles.rbac.authorization.k8s.io is forbidden: User \"system:kube-controller-manager\" cannot watch resource \"roles\" in API group \"rbac.authorization.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Role"
	E1101 08:37:17.318158       1 reflector.go:205] "Failed to watch" err="controllerrevisions.apps is forbidden: User \"system:kube-controller-manager\" cannot watch resource \"controllerrevisions\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ControllerRevision"
	E1101 08:37:17.318216       1 reflector.go:205] "Failed to watch" err="resourceclaims.resource.k8s.io is forbidden: User \"system:kube-controller-manager\" cannot watch resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	W1101 08:37:17.318240       1 client_builder_dynamic.go:197] get or create service account failed: serviceaccounts "endpoint-controller" is forbidden: User "system:kube-controller-manager" cannot get resource "serviceaccounts" in API group "" in the namespace "kube-system"
	E1101 08:37:17.318277       1 reflector.go:205] "Failed to watch" err="validatingadmissionpolicybindings.admissionregistration.k8s.io is forbidden: User \"system:kube-controller-manager\" cannot watch resource \"validatingadmissionpolicybindings\" in API group \"admissionregistration.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ValidatingAdmissionPolicyBinding"
	E1101 08:37:17.318324       1 reflector.go:205] "Failed to watch" err="runtimeclasses.node.k8s.io is forbidden: User \"system:kube-controller-manager\" cannot watch resource \"runtimeclasses\" in API group \"node.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
	W1101 08:37:17.318383       1 client_builder_dynamic.go:197] get or create service account failed: serviceaccounts "endpointslice-controller" is forbidden: User "system:kube-controller-manager" cannot get resource "serviceaccounts" in API group "" in the namespace "kube-system"
	I1101 08:37:17.318470       1 endpointslice_controller.go:344] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="kube-system/kube-dns" err="failed to update kube-dns-kjtxl EndpointSlice for Service kube-system/kube-dns: Put \"https://192.168.49.2:8441/apis/discovery.k8s.io/v1/namespaces/kube-system/endpointslices/kube-dns-kjtxl\": failed to get token for kube-system/endpointslice-controller: timed out waiting for the condition"
	E1101 08:37:17.318764       1 reflector.go:205] "Failed to watch" err="configmaps is forbidden: User \"system:kube-controller-manager\" cannot watch resource \"configmaps\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ConfigMap"
	W1101 08:37:17.318830       1 client_builder_dynamic.go:197] get or create service account failed: serviceaccounts "replicaset-controller" is forbidden: User "system:kube-controller-manager" cannot get resource "serviceaccounts" in API group "" in the namespace "kube-system"
	E1101 08:37:17.319038       1 reflector.go:205] "Failed to watch" err="secrets is forbidden: User \"system:kube-controller-manager\" cannot watch resource \"secrets\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Secret"
	E1101 08:37:17.319040       1 reflector.go:205] "Failed to watch" err="serviceaccounts is forbidden: User \"system:kube-controller-manager\" cannot watch resource \"serviceaccounts\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ServiceAccount"
	I1101 08:37:17.319070       1 event.go:377] Event(v1.ObjectReference{Kind:"Service", Namespace:"kube-system", Name:"kube-dns", UID:"bc124352-4425-44e6-95d9-80dcbbf02008", APIVersion:"v1", ResourceVersion:"250", FieldPath:""}): type: 'Warning' reason: 'FailedToUpdateEndpointSlices' Error updating Endpoint Slices for Service kube-system/kube-dns: failed to update kube-dns-kjtxl EndpointSlice for Service kube-system/kube-dns: Put "https://192.168.49.2:8441/apis/discovery.k8s.io/v1/namespaces/kube-system/endpointslices/kube-dns-kjtxl": failed to get token for kube-system/endpointslice-controller: timed out waiting for the condition
	E1101 08:37:17.319246       1 reflector.go:205] "Failed to watch" err="clusterrolebindings.rbac.authorization.k8s.io is forbidden: User \"system:kube-controller-manager\" cannot watch resource \"clusterrolebindings\" in API group \"rbac.authorization.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ClusterRoleBinding"
	W1101 08:37:17.321525       1 client_builder_dynamic.go:197] get or create service account failed: serviceaccounts "endpoint-controller" is forbidden: User "system:kube-controller-manager" cannot get resource "serviceaccounts" in API group "" in the namespace "kube-system"
	W1101 08:37:17.323695       1 client_builder_dynamic.go:197] get or create service account failed: serviceaccounts "endpointslice-controller" is forbidden: User "system:kube-controller-manager" cannot get resource "serviceaccounts" in API group "" in the namespace "kube-system": RBAC: [clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:kube-controller-manager" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found]
	W1101 08:37:17.323737       1 client_builder_dynamic.go:197] get or create service account failed: serviceaccounts "replicaset-controller" is forbidden: User "system:kube-controller-manager" cannot get resource "serviceaccounts" in API group "" in the namespace "kube-system": RBAC: [clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:kube-controller-manager" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found]
	E1101 08:38:31.221564       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1101 08:38:31.225647       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1101 08:38:31.229303       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1101 08:38:31.230035       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1101 08:38:31.232479       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1101 08:38:31.238461       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	
	
	==> kube-controller-manager [43cade1b0a616a254ab69677350d4bc37fc6873a71afcd8233201c8d69ca768b] <==
	I1101 08:36:05.588733       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1101 08:36:05.588822       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1101 08:36:05.588908       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1101 08:36:05.589138       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1101 08:36:05.589228       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1101 08:36:05.589406       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1101 08:36:05.589566       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1101 08:36:05.589641       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1101 08:36:05.589678       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1101 08:36:05.589744       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1101 08:36:05.590638       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1101 08:36:05.590693       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1101 08:36:05.592483       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1101 08:36:05.593200       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1101 08:36:05.594417       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1101 08:36:05.594440       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1101 08:36:05.594478       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1101 08:36:05.594501       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1101 08:36:05.594509       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1101 08:36:05.594513       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1101 08:36:05.599711       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1101 08:36:05.600650       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="functional-290156" podCIDRs=["10.244.0.0/24"]
	I1101 08:36:05.610952       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1101 08:36:05.616467       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1101 08:36:20.545275       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [8bd1c8e9c88db77d386fd5bb950d33af1ed5e0988e094d2b300e175e3dd4c93b] <==
	E1101 08:36:45.098440       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-290156&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1101 08:36:46.775427       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-290156&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1101 08:36:50.495730       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-290156&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1101 08:37:01.790672       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-290156&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1101 08:37:15.302801       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-290156&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	I1101 08:38:04.494737       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1101 08:38:04.494778       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1101 08:38:04.494853       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1101 08:38:04.515071       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1101 08:38:04.515137       1 server_linux.go:132] "Using iptables Proxier"
	I1101 08:38:04.521090       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1101 08:38:04.521521       1 server.go:527] "Version info" version="v1.34.1"
	I1101 08:38:04.521553       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1101 08:38:04.524457       1 config.go:200] "Starting service config controller"
	I1101 08:38:04.524479       1 config.go:106] "Starting endpoint slice config controller"
	I1101 08:38:04.524487       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1101 08:38:04.524494       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1101 08:38:04.524552       1 config.go:309] "Starting node config controller"
	I1101 08:38:04.524561       1 config.go:403] "Starting serviceCIDR config controller"
	I1101 08:38:04.524569       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1101 08:38:04.524562       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1101 08:38:04.524586       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1101 08:38:04.624738       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1101 08:38:04.624773       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1101 08:38:04.624793       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-proxy [b2f1927668051a51df8fd44014799506dae15af1ec64d92d92720c9f0382f296] <==
	I1101 08:36:06.794559       1 server_linux.go:53] "Using iptables proxy"
	I1101 08:36:06.875962       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1101 08:36:06.976915       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1101 08:36:06.976958       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1101 08:36:06.977054       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1101 08:36:06.998037       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1101 08:36:06.998100       1 server_linux.go:132] "Using iptables Proxier"
	I1101 08:36:07.004151       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1101 08:36:07.004615       1 server.go:527] "Version info" version="v1.34.1"
	I1101 08:36:07.004656       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1101 08:36:07.006235       1 config.go:200] "Starting service config controller"
	I1101 08:36:07.006257       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1101 08:36:07.006294       1 config.go:106] "Starting endpoint slice config controller"
	I1101 08:36:07.006320       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1101 08:36:07.006387       1 config.go:309] "Starting node config controller"
	I1101 08:36:07.006395       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1101 08:36:07.006401       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1101 08:36:07.006608       1 config.go:403] "Starting serviceCIDR config controller"
	I1101 08:36:07.006650       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1101 08:36:07.107170       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1101 08:36:07.107214       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1101 08:36:07.107211       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [02dcdce13fd0e721b87b4b26559a9ffc627128a1dfeb0074fe7c9a067d7dd880] <==
	E1101 08:35:58.611764       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1101 08:35:58.612015       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1101 08:35:58.612106       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1101 08:35:58.612350       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1101 08:35:58.612395       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1101 08:35:58.612472       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1101 08:35:58.613101       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1101 08:35:58.613206       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1101 08:35:58.613236       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1101 08:35:58.613275       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1101 08:35:58.613306       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1101 08:35:58.613335       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1101 08:35:59.429249       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1101 08:35:59.464569       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1101 08:35:59.553403       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1101 08:35:59.645275       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1101 08:35:59.774698       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1101 08:35:59.787777       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	I1101 08:36:02.109578       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1101 08:36:54.027606       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1101 08:36:54.027641       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1101 08:36:54.027700       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1101 08:36:54.027722       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1101 08:36:54.027739       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1101 08:36:54.027764       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [fb164bd7d86480962f3a171072e39c3ff2a21b077b7eec5009fd83d3f26f0afd] <==
	I1101 08:36:57.742580       1 serving.go:386] Generated self-signed cert in-memory
	I1101 08:36:58.280360       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1101 08:36:58.280385       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1101 08:36:58.285700       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1101 08:36:58.285717       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1101 08:36:58.285744       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1101 08:36:58.285747       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1101 08:36:58.285799       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1101 08:36:58.285810       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1101 08:36:58.286244       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1101 08:36:58.286312       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1101 08:36:58.386919       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1101 08:36:58.386920       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1101 08:36:58.387147       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E1101 08:37:17.297484       1 reflector.go:205] "Failed to watch" err="configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot watch resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1101 08:37:17.324831       1 reflector.go:205] "Failed to watch" err="deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	
	
	==> kubelet <==
	Nov 01 08:45:09 functional-290156 kubelet[4144]: E1101 08:45:09.961482    4144 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-pg9sh" podUID="b67a24cf-cf5a-46f6-b27a-3a8a130af70e"
	Nov 01 08:45:23 functional-290156 kubelet[4144]: E1101 08:45:23.961513    4144 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-pg9sh" podUID="b67a24cf-cf5a-46f6-b27a-3a8a130af70e"
	Nov 01 08:45:23 functional-290156 kubelet[4144]: E1101 08:45:23.961722    4144 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-jbb2v" podUID="7e5e84ac-643e-4dd7-a842-661be04a4048"
	Nov 01 08:45:34 functional-290156 kubelet[4144]: E1101 08:45:34.961575    4144 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-pg9sh" podUID="b67a24cf-cf5a-46f6-b27a-3a8a130af70e"
	Nov 01 08:45:35 functional-290156 kubelet[4144]: E1101 08:45:35.962319    4144 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-jbb2v" podUID="7e5e84ac-643e-4dd7-a842-661be04a4048"
	Nov 01 08:45:47 functional-290156 kubelet[4144]: E1101 08:45:47.961856    4144 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-pg9sh" podUID="b67a24cf-cf5a-46f6-b27a-3a8a130af70e"
	Nov 01 08:45:50 functional-290156 kubelet[4144]: E1101 08:45:50.961832    4144 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-jbb2v" podUID="7e5e84ac-643e-4dd7-a842-661be04a4048"
	Nov 01 08:46:01 functional-290156 kubelet[4144]: E1101 08:46:01.961437    4144 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-pg9sh" podUID="b67a24cf-cf5a-46f6-b27a-3a8a130af70e"
	Nov 01 08:46:05 functional-290156 kubelet[4144]: E1101 08:46:05.961551    4144 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-jbb2v" podUID="7e5e84ac-643e-4dd7-a842-661be04a4048"
	Nov 01 08:46:16 functional-290156 kubelet[4144]: E1101 08:46:16.962574    4144 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-pg9sh" podUID="b67a24cf-cf5a-46f6-b27a-3a8a130af70e"
	Nov 01 08:46:17 functional-290156 kubelet[4144]: E1101 08:46:17.961319    4144 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-jbb2v" podUID="7e5e84ac-643e-4dd7-a842-661be04a4048"
	Nov 01 08:46:29 functional-290156 kubelet[4144]: E1101 08:46:29.961677    4144 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-jbb2v" podUID="7e5e84ac-643e-4dd7-a842-661be04a4048"
	Nov 01 08:46:31 functional-290156 kubelet[4144]: E1101 08:46:31.961694    4144 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-pg9sh" podUID="b67a24cf-cf5a-46f6-b27a-3a8a130af70e"
	Nov 01 08:46:42 functional-290156 kubelet[4144]: E1101 08:46:42.963670    4144 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-pg9sh" podUID="b67a24cf-cf5a-46f6-b27a-3a8a130af70e"
	Nov 01 08:46:43 functional-290156 kubelet[4144]: E1101 08:46:43.962231    4144 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-jbb2v" podUID="7e5e84ac-643e-4dd7-a842-661be04a4048"
	Nov 01 08:46:54 functional-290156 kubelet[4144]: E1101 08:46:54.961449    4144 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-jbb2v" podUID="7e5e84ac-643e-4dd7-a842-661be04a4048"
	Nov 01 08:46:55 functional-290156 kubelet[4144]: E1101 08:46:55.961444    4144 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-pg9sh" podUID="b67a24cf-cf5a-46f6-b27a-3a8a130af70e"
	Nov 01 08:47:07 functional-290156 kubelet[4144]: E1101 08:47:07.961818    4144 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-jbb2v" podUID="7e5e84ac-643e-4dd7-a842-661be04a4048"
	Nov 01 08:47:10 functional-290156 kubelet[4144]: E1101 08:47:10.961742    4144 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-pg9sh" podUID="b67a24cf-cf5a-46f6-b27a-3a8a130af70e"
	Nov 01 08:47:22 functional-290156 kubelet[4144]: E1101 08:47:22.961923    4144 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-pg9sh" podUID="b67a24cf-cf5a-46f6-b27a-3a8a130af70e"
	Nov 01 08:47:22 functional-290156 kubelet[4144]: E1101 08:47:22.961955    4144 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-jbb2v" podUID="7e5e84ac-643e-4dd7-a842-661be04a4048"
	Nov 01 08:47:34 functional-290156 kubelet[4144]: E1101 08:47:34.962115    4144 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-jbb2v" podUID="7e5e84ac-643e-4dd7-a842-661be04a4048"
	Nov 01 08:47:34 functional-290156 kubelet[4144]: E1101 08:47:34.962246    4144 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-pg9sh" podUID="b67a24cf-cf5a-46f6-b27a-3a8a130af70e"
	Nov 01 08:47:45 functional-290156 kubelet[4144]: E1101 08:47:45.961536    4144 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-jbb2v" podUID="7e5e84ac-643e-4dd7-a842-661be04a4048"
	Nov 01 08:47:46 functional-290156 kubelet[4144]: E1101 08:47:46.962708    4144 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-pg9sh" podUID="b67a24cf-cf5a-46f6-b27a-3a8a130af70e"
	
	
	==> kubernetes-dashboard [9a426d430e5edb07b79eba26b83d6a79fdfccf22ac1c122b90e4a49b4315743a] <==
	2025/11/01 08:38:36 Starting overwatch
	2025/11/01 08:38:36 Using namespace: kubernetes-dashboard
	2025/11/01 08:38:36 Using in-cluster config to connect to apiserver
	2025/11/01 08:38:36 Using secret token for csrf signing
	2025/11/01 08:38:36 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/11/01 08:38:36 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/11/01 08:38:36 Successful initial request to the apiserver, version: v1.34.1
	2025/11/01 08:38:36 Generating JWE encryption key
	2025/11/01 08:38:36 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/11/01 08:38:36 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/11/01 08:38:36 Initializing JWE encryption key from synchronized object
	2025/11/01 08:38:36 Creating in-cluster Sidecar client
	2025/11/01 08:38:36 Successful request to sidecar
	2025/11/01 08:38:36 Serving insecurely on HTTP port: 9090
	
	
	==> storage-provisioner [0e77098d21151735e5124ea2a830e8218914932fcef7a9ca844a6282acf562fa] <==
	W1101 08:47:23.884475       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 08:47:25.887319       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 08:47:25.891240       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 08:47:27.894472       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 08:47:27.899420       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 08:47:29.902074       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 08:47:29.905981       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 08:47:31.908995       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 08:47:31.916063       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 08:47:33.919400       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 08:47:33.923463       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 08:47:35.926291       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 08:47:35.930079       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 08:47:37.933682       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 08:47:37.938749       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 08:47:39.941845       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 08:47:39.945469       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 08:47:41.949349       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 08:47:41.954955       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 08:47:43.958014       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 08:47:43.961963       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 08:47:45.964707       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 08:47:45.969952       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 08:47:47.973387       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 08:47:47.978001       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [e4782e228e928767642e9c3f7b578715eab10809a6c9c6e8a82fa02df0a459cf] <==
	I1101 08:36:44.095765       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1101 08:36:44.097294       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: connect: connection refused
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-290156 -n functional-290156
helpers_test.go:269: (dbg) Run:  kubectl --context functional-290156 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: busybox-mount hello-node-75c85bcc94-jbb2v hello-node-connect-7d85dfc575-pg9sh
helpers_test.go:282: ======> post-mortem[TestFunctional/parallel/ServiceCmdConnect]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context functional-290156 describe pod busybox-mount hello-node-75c85bcc94-jbb2v hello-node-connect-7d85dfc575-pg9sh
helpers_test.go:290: (dbg) kubectl --context functional-290156 describe pod busybox-mount hello-node-75c85bcc94-jbb2v hello-node-connect-7d85dfc575-pg9sh:

                                                
                                                
-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-290156/192.168.49.2
	Start Time:       Sat, 01 Nov 2025 08:38:21 +0000
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.7
	IPs:
	  IP:  10.244.0.7
	Containers:
	  mount-munger:
	    Container ID:  cri-o://dc8b45732638aec9f126a903648c73e62c5572cb934eaae26b767e14aca45b9e
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Sat, 01 Nov 2025 08:38:22 +0000
	      Finished:     Sat, 01 Nov 2025 08:38:22 +0000
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-z766d (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-z766d:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age    From               Message
	  ----    ------     ----   ----               -------
	  Normal  Scheduled  9m28s  default-scheduler  Successfully assigned default/busybox-mount to functional-290156
	  Normal  Pulling    9m28s  kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     9m27s  kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 731ms (744ms including waiting). Image size: 4631262 bytes.
	  Normal  Created    9m27s  kubelet            Created container: mount-munger
	  Normal  Started    9m27s  kubelet            Started container mount-munger
	
	
	Name:             hello-node-75c85bcc94-jbb2v
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-290156/192.168.49.2
	Start Time:       Sat, 01 Nov 2025 08:38:11 +0000
	Labels:           app=hello-node
	                  pod-template-hash=75c85bcc94
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.5
	IPs:
	  IP:           10.244.0.5
	Controlled By:  ReplicaSet/hello-node-75c85bcc94
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-dzvzp (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-dzvzp:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                     From               Message
	  ----     ------     ----                    ----               -------
	  Normal   Scheduled  9m38s                   default-scheduler  Successfully assigned default/hello-node-75c85bcc94-jbb2v to functional-290156
	  Warning  Failed     6m54s (x5 over 9m38s)   kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
	  Warning  Failed     6m54s (x5 over 9m38s)   kubelet            Error: ErrImagePull
	  Normal   BackOff    4m27s (x20 over 9m37s)  kubelet            Back-off pulling image "kicbase/echo-server"
	  Warning  Failed     4m27s (x20 over 9m37s)  kubelet            Error: ImagePullBackOff
	  Normal   Pulling    4m13s (x6 over 9m38s)   kubelet            Pulling image "kicbase/echo-server"
	
	
	Name:             hello-node-connect-7d85dfc575-pg9sh
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-290156/192.168.49.2
	Start Time:       Sat, 01 Nov 2025 08:38:11 +0000
	Labels:           app=hello-node-connect
	                  pod-template-hash=7d85dfc575
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.4
	IPs:
	  IP:           10.244.0.4
	Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-kg9fb (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-kg9fb:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                     From               Message
	  ----     ------     ----                    ----               -------
	  Normal   Scheduled  9m38s                   default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-pg9sh to functional-290156
	  Normal   Pulling    6m42s (x5 over 9m38s)   kubelet            Pulling image "kicbase/echo-server"
	  Warning  Failed     6m42s (x5 over 9m38s)   kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
	  Warning  Failed     6m42s (x5 over 9m38s)   kubelet            Error: ErrImagePull
	  Normal   BackOff    4m30s (x22 over 9m37s)  kubelet            Back-off pulling image "kicbase/echo-server"
	  Warning  Failed     4m30s (x22 over 9m37s)  kubelet            Error: ImagePullBackOff

                                                
                                                
-- /stdout --
helpers_test.go:293: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestFunctional/parallel/ServiceCmdConnect (603.12s)
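
Note on the repeated ErrImagePull events above: CRI-O is running with short-name-mode set to enforcing, so the unqualified reference "kicbase/echo-server" is rejected because it resolves to more than one candidate registry. A minimal diagnostic sketch, assuming the node uses the standard containers registries.conf layout and that docker.io/kicbase/echo-server:1.0 is a valid fully-qualified reference (both are assumptions, not taken from this run):

	# Show the short-name policy CRI-O sees inside the node
	minikube -p functional-290156 ssh -- grep -R "short-name-mode" /etc/containers/registries.conf /etc/containers/registries.conf.d/ 2>/dev/null
	# Re-pointing the stuck container at a fully-qualified reference avoids the ambiguous lookup
	kubectl --context functional-290156 set image deployment/hello-node-connect echo-server=docker.io/kicbase/echo-server:1.0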

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (2.29s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-290156 image ls --format yaml --alsologtostderr
functional_test.go:276: (dbg) Done: out/minikube-linux-amd64 -p functional-290156 image ls --format yaml --alsologtostderr: (2.288880818s)
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-290156 image ls --format yaml --alsologtostderr:
[]

                                                
                                                
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-290156 image ls --format yaml --alsologtostderr:
I1101 08:38:39.472501   49434 out.go:360] Setting OutFile to fd 1 ...
I1101 08:38:39.472851   49434 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1101 08:38:39.472881   49434 out.go:374] Setting ErrFile to fd 2...
I1101 08:38:39.472888   49434 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1101 08:38:39.473192   49434 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21835-5913/.minikube/bin
I1101 08:38:39.474055   49434 config.go:182] Loaded profile config "functional-290156": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1101 08:38:39.474189   49434 config.go:182] Loaded profile config "functional-290156": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1101 08:38:39.474719   49434 cli_runner.go:164] Run: docker container inspect functional-290156 --format={{.State.Status}}
I1101 08:38:39.498957   49434 ssh_runner.go:195] Run: systemctl --version
I1101 08:38:39.499024   49434 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-290156
I1101 08:38:39.524139   49434 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21835-5913/.minikube/machines/functional-290156/id_rsa Username:docker}
I1101 08:38:39.635822   49434 ssh_runner.go:195] Run: sudo crictl images --output json
I1101 08:38:41.669468   49434 ssh_runner.go:235] Completed: sudo crictl images --output json: (2.033607519s)
W1101 08:38:41.669550   49434 cache_images.go:736] Failed to list images for profile functional-290156 crictl images: sudo crictl images --output json: Process exited with status 1
stdout:

                                                
                                                
stderr:
E1101 08:38:41.666532    7235 log.go:32] "ListImages with filter from image service failed" err="rpc error: code = DeadlineExceeded desc = stream terminated by RST_STREAM with error code: CANCEL" filter="image:{}"
time="2025-11-01T08:38:41Z" level=fatal msg="listing images: rpc error: code = DeadlineExceeded desc = stream terminated by RST_STREAM with error code: CANCEL"
functional_test.go:290: expected - registry.k8s.io/pause to be listed with minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageListYaml (2.29s)
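
The empty image list above looks like a timeout rather than an empty store: the crictl call finished in ~2.03s and crictl's default --timeout is 2s, which matches the DeadlineExceeded error in stderr. A quick manual check, with the 10s value chosen arbitrarily:

	minikube -p functional-290156 ssh -- sudo crictl --timeout 10s images --output json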

                                                
                                    
TestFunctional/parallel/ServiceCmd/DeployApp (600.62s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-290156 create deployment hello-node --image kicbase/echo-server
functional_test.go:1455: (dbg) Run:  kubectl --context functional-290156 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:352: "hello-node-75c85bcc94-jbb2v" [7e5e84ac-643e-4dd7-a842-661be04a4048] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
functional_test.go:1460: ***** TestFunctional/parallel/ServiceCmd/DeployApp: pod "app=hello-node" failed to start within 10m0s: context deadline exceeded ****
functional_test.go:1460: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-290156 -n functional-290156
functional_test.go:1460: TestFunctional/parallel/ServiceCmd/DeployApp: showing logs for failed pods as of 2025-11-01 08:47:41.69108738 +0000 UTC m=+1114.482079443
functional_test.go:1460: (dbg) Run:  kubectl --context functional-290156 describe po hello-node-75c85bcc94-jbb2v -n default
functional_test.go:1460: (dbg) kubectl --context functional-290156 describe po hello-node-75c85bcc94-jbb2v -n default:
Name:             hello-node-75c85bcc94-jbb2v
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-290156/192.168.49.2
Start Time:       Sat, 01 Nov 2025 08:38:11 +0000
Labels:           app=hello-node
pod-template-hash=75c85bcc94
Annotations:      <none>
Status:           Pending
IP:               10.244.0.5
IPs:
IP:           10.244.0.5
Controlled By:  ReplicaSet/hello-node-75c85bcc94
Containers:
echo-server:
Container ID:   
Image:          kicbase/echo-server
Image ID:       
Port:           <none>
Host Port:      <none>
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:    <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-dzvzp (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
kube-api-access-dzvzp:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                     From               Message
----     ------     ----                    ----               -------
Normal   Scheduled  9m30s                   default-scheduler  Successfully assigned default/hello-node-75c85bcc94-jbb2v to functional-290156
Warning  Failed     6m46s (x5 over 9m30s)   kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
Warning  Failed     6m46s (x5 over 9m30s)   kubelet            Error: ErrImagePull
Normal   BackOff    4m19s (x20 over 9m29s)  kubelet            Back-off pulling image "kicbase/echo-server"
Warning  Failed     4m19s (x20 over 9m29s)  kubelet            Error: ImagePullBackOff
Normal   Pulling    4m5s (x6 over 9m30s)    kubelet            Pulling image "kicbase/echo-server"
functional_test.go:1460: (dbg) Run:  kubectl --context functional-290156 logs hello-node-75c85bcc94-jbb2v -n default
functional_test.go:1460: (dbg) Non-zero exit: kubectl --context functional-290156 logs hello-node-75c85bcc94-jbb2v -n default: exit status 1 (67.367582ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-75c85bcc94-jbb2v" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test.go:1460: kubectl --context functional-290156 logs hello-node-75c85bcc94-jbb2v -n default: exit status 1
functional_test.go:1461: failed waiting for hello-node pod: app=hello-node within 10m0s: context deadline exceeded
--- FAIL: TestFunctional/parallel/ServiceCmd/DeployApp (600.62s)
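
This deployment hits the same short-name enforcement as the ServiceCmdConnect pods: kubectl create deployment is given the unqualified "kicbase/echo-server". A hypothetical variant of the failing step using a fully-qualified reference (the docker.io prefix and 1.0 tag are assumptions):

	kubectl --context functional-290156 create deployment hello-node --image=docker.io/kicbase/echo-server:1.0
	kubectl --context functional-290156 expose deployment hello-node --type=NodePort --port=8080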

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.08s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-amd64 -p functional-290156 image load --daemon kicbase/echo-server:functional-290156 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-290156 image ls
functional_test.go:461: expected "kicbase/echo-server:functional-290156" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.08s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.94s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-290156 image load --daemon kicbase/echo-server:functional-290156 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-290156 image ls
functional_test.go:461: expected "kicbase/echo-server:functional-290156" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.94s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.41s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-290156
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-290156 image load --daemon kicbase/echo-server:functional-290156 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-290156 image ls
functional_test.go:461: expected "kicbase/echo-server:functional-290156" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.41s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.32s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-amd64 -p functional-290156 image save kicbase/echo-server:functional-290156 /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:401: expected "/home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar" to exist after `image save`, but doesn't exist
--- FAIL: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.32s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.2s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-290156 image load /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:426: loading image into minikube from file: <nil>

                                                
                                                
** stderr ** 
	I1101 08:37:45.979219   43784 out.go:360] Setting OutFile to fd 1 ...
	I1101 08:37:45.979555   43784 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 08:37:45.979566   43784 out.go:374] Setting ErrFile to fd 2...
	I1101 08:37:45.979571   43784 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 08:37:45.979787   43784 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21835-5913/.minikube/bin
	I1101 08:37:45.980419   43784 config.go:182] Loaded profile config "functional-290156": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 08:37:45.980515   43784 config.go:182] Loaded profile config "functional-290156": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 08:37:45.980925   43784 cli_runner.go:164] Run: docker container inspect functional-290156 --format={{.State.Status}}
	I1101 08:37:45.999460   43784 ssh_runner.go:195] Run: systemctl --version
	I1101 08:37:45.999506   43784 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-290156
	I1101 08:37:46.017809   43784 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21835-5913/.minikube/machines/functional-290156/id_rsa Username:docker}
	I1101 08:37:46.116730   43784 cache_images.go:291] Loading image from: /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar
	W1101 08:37:46.116790   43784 cache_images.go:255] Failed to load cached images for "functional-290156": loading images: stat /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar: no such file or directory
	I1101 08:37:46.116816   43784 cache_images.go:267] failed pushing to: functional-290156

                                                
                                                
** /stderr **
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.20s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.35s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-290156
functional_test.go:439: (dbg) Run:  out/minikube-linux-amd64 -p functional-290156 image save --daemon kicbase/echo-server:functional-290156 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-290156
functional_test.go:447: (dbg) Non-zero exit: docker image inspect localhost/kicbase/echo-server:functional-290156: exit status 1 (16.870973ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: No such image: localhost/kicbase/echo-server:functional-290156

                                                
                                                
** /stderr **
functional_test.go:449: expected image to be loaded into Docker, but image was not found: exit status 1

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: No such image: localhost/kicbase/echo-server:functional-290156

                                                
                                                
** /stderr **
--- FAIL: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.35s)
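
The save/load failures in this group are downstream of the pull problem: kicbase/echo-server:functional-290156 never landed in CRI-O's store, so image save wrote no tarball and the later load/inspect steps had nothing to operate on. A manual round-trip that would confirm the store contents, with the image reference and output path assumed:

	minikube -p functional-290156 image pull docker.io/kicbase/echo-server:1.0
	minikube -p functional-290156 image ls
	minikube -p functional-290156 image save docker.io/kicbase/echo-server:1.0 /tmp/echo-server.tar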

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (0.53s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-amd64 -p functional-290156 service --namespace=default --https --url hello-node
functional_test.go:1519: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-290156 service --namespace=default --https --url hello-node: exit status 115 (534.732863ms)

                                                
                                                
-- stdout --
	https://192.168.49.2:30951
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_3af0dd3f106bd0c134df3d834cbdbb288a06d35d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1521: failed to get service url. args "out/minikube-linux-amd64 -p functional-290156 service --namespace=default --https --url hello-node" : exit status 115
--- FAIL: TestFunctional/parallel/ServiceCmd/HTTPS (0.53s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (0.54s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-amd64 -p functional-290156 service hello-node --url --format={{.IP}}
functional_test.go:1550: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-290156 service hello-node --url --format={{.IP}}: exit status 115 (538.504598ms)

                                                
                                                
-- stdout --
	192.168.49.2
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_7cc4328ee572bf2be3730700e5bda4ff5ee9066f_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1552: failed to get service url with custom format. args "out/minikube-linux-amd64 -p functional-290156 service hello-node --url --format={{.IP}}": exit status 115
--- FAIL: TestFunctional/parallel/ServiceCmd/Format (0.54s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/URL (0.54s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-amd64 -p functional-290156 service hello-node --url
functional_test.go:1569: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-290156 service hello-node --url: exit status 115 (538.964138ms)

                                                
                                                
-- stdout --
	http://192.168.49.2:30951
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_7cc4328ee572bf2be3730700e5bda4ff5ee9066f_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1571: failed to get service url. args: "out/minikube-linux-amd64 -p functional-290156 service hello-node --url": exit status 115
functional_test.go:1575: found endpoint for hello-node: http://192.168.49.2:30951
--- FAIL: TestFunctional/parallel/ServiceCmd/URL (0.54s)
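
All three service URL subtests fail the same way: the URL is printed, but minikube exits 115 because the hello-node service has no ready endpoints behind it (its only pod is stuck in ImagePullBackOff). A quick check from the same context, using the resource names shown above:

	kubectl --context functional-290156 get endpoints hello-node
	kubectl --context functional-290156 get pods -l app=hello-node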

                                                
                                    
TestJSONOutput/pause/Command (1.94s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-730531 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p json-output-730531 --output=json --user=testUser: exit status 80 (1.940766885s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"e1cbf3b8-6a56-4873-b5ce-1536812830f2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"Pausing node json-output-730531 ...","name":"Pausing","totalsteps":"1"}}
	{"specversion":"1.0","id":"0e0e631b-56aa-4706-8576-ab537e987b64","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"Pause: list running: runc: sudo runc list -f json: Process exited with status 1\nstdout:\n\nstderr:\ntime=\"2025-11-01T08:57:43Z\" level=error msg=\"open /run/runc: no such file or directory\"","name":"GUEST_PAUSE","url":""}}
	{"specversion":"1.0","id":"476a83b5-5d22-4e0f-bc88-298739477dec","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│    Please also attach the following f
ile to the GitHub issue:                             │\n│    - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │\n│                                                                                           │\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

                                                
                                                
-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-linux-amd64 pause -p json-output-730531 --output=json --user=testUser": exit status 80
--- FAIL: TestJSONOutput/pause/Command (1.94s)
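
The GUEST_PAUSE error is a runtime-level failure: "sudo runc list" aborts because its state directory is missing ("open /run/runc: no such file or directory"). A small diagnostic sketch; /run/runc and /run/crun are the default state roots for runc and crun, and which low-level runtime this base image actually uses is an assumption here:

	minikube -p json-output-730531 ssh -- "sudo ls -d /run/runc /run/crun 2>/dev/null; sudo crio config 2>/dev/null | grep -E 'default_runtime|runtime_path'"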

                                                
                                    
TestJSONOutput/unpause/Command (1.96s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-730531 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-linux-amd64 unpause -p json-output-730531 --output=json --user=testUser: exit status 80 (1.95620614s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"1e33363f-b481-4ef1-9606-5868f3a7bf26","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"Unpausing node json-output-730531 ...","name":"Unpausing","totalsteps":"1"}}
	{"specversion":"1.0","id":"6200e142-ffc1-43bf-af5c-63710c4624da","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"Pause: list paused: runc: sudo runc list -f json: Process exited with status 1\nstdout:\n\nstderr:\ntime=\"2025-11-01T08:57:45Z\" level=error msg=\"open /run/runc: no such file or directory\"","name":"GUEST_UNPAUSE","url":""}}
	{"specversion":"1.0","id":"b322c9c8-be8c-4884-ae2f-152087643b4f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│    Please also attach the following f
ile to the GitHub issue:                             │\n│    - /tmp/minikube_unpause_85c908ac827001a7ced33feb0caf7da086d17584_0.log                 │\n│                                                                                           │\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

                                                
                                                
-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-linux-amd64 unpause -p json-output-730531 --output=json --user=testUser": exit status 80
--- FAIL: TestJSONOutput/unpause/Command (1.96s)

                                                
                                    
TestPreload (425.87s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:43: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-398259 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.0
E1101 09:06:43.471753    9414 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21835-5913/.minikube/profiles/addons-491859/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
preload_test.go:43: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-398259 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.0: (45.781774584s)
preload_test.go:51: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-398259 image pull gcr.io/k8s-minikube/busybox
preload_test.go:57: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-398259
preload_test.go:57: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-398259: (5.911054726s)
preload_test.go:65: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-398259 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio
E1101 09:07:41.354791    9414 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21835-5913/.minikube/profiles/functional-290156/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 09:09:04.427626    9414 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21835-5913/.minikube/profiles/functional-290156/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 09:09:46.543297    9414 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21835-5913/.minikube/profiles/addons-491859/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 09:11:43.471972    9414 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21835-5913/.minikube/profiles/addons-491859/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 09:12:41.355279    9414 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21835-5913/.minikube/profiles/functional-290156/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
preload_test.go:65: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p test-preload-398259 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio: exit status 80 (6m9.450957457s)

                                                
                                                
-- stdout --
	* [test-preload-398259] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21835
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21835-5913/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21835-5913/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.34.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.34.1
	* Using the docker driver based on existing profile
	* Starting "test-preload-398259" primary control-plane node in "test-preload-398259" cluster
	* Pulling base image v0.0.48-1760939008-21773 ...
	* Downloading Kubernetes v1.32.0 preload ...
	* Preparing Kubernetes v1.32.0 on CRI-O 1.34.1 ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1101 09:07:22.619970  169856 out.go:360] Setting OutFile to fd 1 ...
	I1101 09:07:22.620138  169856 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:07:22.620150  169856 out.go:374] Setting ErrFile to fd 2...
	I1101 09:07:22.620158  169856 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:07:22.620361  169856 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21835-5913/.minikube/bin
	I1101 09:07:22.620881  169856 out.go:368] Setting JSON to false
	I1101 09:07:22.621795  169856 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":2991,"bootTime":1761985052,"procs":199,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1101 09:07:22.621914  169856 start.go:143] virtualization: kvm guest
	I1101 09:07:22.624167  169856 out.go:179] * [test-preload-398259] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1101 09:07:22.625767  169856 out.go:179]   - MINIKUBE_LOCATION=21835
	I1101 09:07:22.625766  169856 notify.go:221] Checking for updates...
	I1101 09:07:22.628595  169856 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1101 09:07:22.629972  169856 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21835-5913/kubeconfig
	I1101 09:07:22.631423  169856 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21835-5913/.minikube
	I1101 09:07:22.632827  169856 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1101 09:07:22.633890  169856 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1101 09:07:22.635434  169856 config.go:182] Loaded profile config "test-preload-398259": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I1101 09:07:22.637442  169856 out.go:179] * Kubernetes 1.34.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.34.1
	I1101 09:07:22.638714  169856 driver.go:422] Setting default libvirt URI to qemu:///system
	I1101 09:07:22.663387  169856 docker.go:124] docker version: linux-28.5.1:Docker Engine - Community
	I1101 09:07:22.663550  169856 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 09:07:22.725211  169856 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:0 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:24 OomKillDisable:false NGoroutines:45 SystemTime:2025-11-01 09:07:22.714668262 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1101 09:07:22.725357  169856 docker.go:319] overlay module found
	I1101 09:07:22.727176  169856 out.go:179] * Using the docker driver based on existing profile
	I1101 09:07:22.728596  169856 start.go:309] selected driver: docker
	I1101 09:07:22.728615  169856 start.go:930] validating driver "docker" against &{Name:test-preload-398259 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:test-preload-398259 Namespace:default APIServerHAVIP: APIServ
erName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:fal
se DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 09:07:22.728734  169856 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1101 09:07:22.729428  169856 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 09:07:22.787986  169856 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:0 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:24 OomKillDisable:false NGoroutines:45 SystemTime:2025-11-01 09:07:22.777591895 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1101 09:07:22.788268  169856 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1101 09:07:22.788300  169856 cni.go:84] Creating CNI manager for ""
	I1101 09:07:22.788351  169856 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1101 09:07:22.788392  169856 start.go:353] cluster config:
	{Name:test-preload-398259 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:test-preload-398259 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Cont
ainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVM
netClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 09:07:22.790265  169856 out.go:179] * Starting "test-preload-398259" primary control-plane node in "test-preload-398259" cluster
	I1101 09:07:22.791648  169856 cache.go:124] Beginning downloading kic base image for docker with crio
	I1101 09:07:22.793272  169856 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1101 09:07:22.794598  169856 preload.go:183] Checking if preload exists for k8s version v1.32.0 and runtime crio
	I1101 09:07:22.794793  169856 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1101 09:07:22.817320  169856 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1101 09:07:22.817345  169856 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1101 09:07:22.820697  169856 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.32.0/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4
	I1101 09:07:22.820718  169856 cache.go:59] Caching tarball of preloaded images
	I1101 09:07:22.820918  169856 preload.go:183] Checking if preload exists for k8s version v1.32.0 and runtime crio
	I1101 09:07:22.822746  169856 out.go:179] * Downloading Kubernetes v1.32.0 preload ...
	I1101 09:07:22.824027  169856 preload.go:313] getting checksum for preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4 from gcs api...
	I1101 09:07:22.856315  169856 preload.go:290] Got checksum from GCS API "2acdb4dde52794f2167c79dcee7507ae"
	I1101 09:07:22.856359  169856 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.32.0/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:2acdb4dde52794f2167c79dcee7507ae -> /home/jenkins/minikube-integration/21835-5913/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4
	I1101 09:07:25.183318  169856 cache.go:62] Finished verifying existence of preloaded tar for v1.32.0 on crio
	I1101 09:07:25.183483  169856 profile.go:143] Saving config to /home/jenkins/minikube-integration/21835-5913/.minikube/profiles/test-preload-398259/config.json ...
	I1101 09:07:25.183724  169856 cache.go:233] Successfully downloaded all kic artifacts
	I1101 09:07:25.183759  169856 start.go:360] acquireMachinesLock for test-preload-398259: {Name:mkc366bf054d0d01534a44955f2a762b7ac566a2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1101 09:07:25.183822  169856 start.go:364] duration metric: took 43.74µs to acquireMachinesLock for "test-preload-398259"
	I1101 09:07:25.183837  169856 start.go:96] Skipping create...Using existing machine configuration
	I1101 09:07:25.183842  169856 fix.go:54] fixHost starting: 
	I1101 09:07:25.184096  169856 cli_runner.go:164] Run: docker container inspect test-preload-398259 --format={{.State.Status}}
	I1101 09:07:25.202269  169856 fix.go:112] recreateIfNeeded on test-preload-398259: state=Stopped err=<nil>
	W1101 09:07:25.202307  169856 fix.go:138] unexpected machine state, will restart: <nil>
	I1101 09:07:25.205156  169856 out.go:252] * Restarting existing docker container for "test-preload-398259" ...
	I1101 09:07:25.205230  169856 cli_runner.go:164] Run: docker start test-preload-398259
	I1101 09:07:25.456435  169856 cli_runner.go:164] Run: docker container inspect test-preload-398259 --format={{.State.Status}}
	I1101 09:07:25.475964  169856 kic.go:430] container "test-preload-398259" state is running.
	I1101 09:07:25.476443  169856 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" test-preload-398259
	I1101 09:07:25.495920  169856 profile.go:143] Saving config to /home/jenkins/minikube-integration/21835-5913/.minikube/profiles/test-preload-398259/config.json ...
	I1101 09:07:25.496200  169856 machine.go:94] provisionDockerMachine start ...
	I1101 09:07:25.496285  169856 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-398259
	I1101 09:07:25.516522  169856 main.go:143] libmachine: Using SSH client type: native
	I1101 09:07:25.516770  169856 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 32958 <nil> <nil>}
	I1101 09:07:25.516799  169856 main.go:143] libmachine: About to run SSH command:
	hostname
	I1101 09:07:25.517479  169856 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:33446->127.0.0.1:32958: read: connection reset by peer
	I1101 09:07:28.661402  169856 main.go:143] libmachine: SSH cmd err, output: <nil>: test-preload-398259
	
	I1101 09:07:28.661441  169856 ubuntu.go:182] provisioning hostname "test-preload-398259"
	I1101 09:07:28.661505  169856 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-398259
	I1101 09:07:28.681422  169856 main.go:143] libmachine: Using SSH client type: native
	I1101 09:07:28.681689  169856 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 32958 <nil> <nil>}
	I1101 09:07:28.681705  169856 main.go:143] libmachine: About to run SSH command:
	sudo hostname test-preload-398259 && echo "test-preload-398259" | sudo tee /etc/hostname
	I1101 09:07:28.834185  169856 main.go:143] libmachine: SSH cmd err, output: <nil>: test-preload-398259
	
	I1101 09:07:28.834286  169856 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-398259
	I1101 09:07:28.853650  169856 main.go:143] libmachine: Using SSH client type: native
	I1101 09:07:28.853907  169856 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 32958 <nil> <nil>}
	I1101 09:07:28.853927  169856 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\stest-preload-398259' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 test-preload-398259/g' /etc/hosts;
				else 
					echo '127.0.1.1 test-preload-398259' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1101 09:07:28.995899  169856 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1101 09:07:28.995926  169856 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21835-5913/.minikube CaCertPath:/home/jenkins/minikube-integration/21835-5913/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21835-5913/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21835-5913/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21835-5913/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21835-5913/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21835-5913/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21835-5913/.minikube}
	I1101 09:07:28.995950  169856 ubuntu.go:190] setting up certificates
	I1101 09:07:28.995963  169856 provision.go:84] configureAuth start
	I1101 09:07:28.996026  169856 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" test-preload-398259
	I1101 09:07:29.015430  169856 provision.go:143] copyHostCerts
	I1101 09:07:29.015506  169856 exec_runner.go:144] found /home/jenkins/minikube-integration/21835-5913/.minikube/cert.pem, removing ...
	I1101 09:07:29.015531  169856 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21835-5913/.minikube/cert.pem
	I1101 09:07:29.015624  169856 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21835-5913/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21835-5913/.minikube/cert.pem (1123 bytes)
	I1101 09:07:29.015784  169856 exec_runner.go:144] found /home/jenkins/minikube-integration/21835-5913/.minikube/key.pem, removing ...
	I1101 09:07:29.015802  169856 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21835-5913/.minikube/key.pem
	I1101 09:07:29.015839  169856 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21835-5913/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21835-5913/.minikube/key.pem (1675 bytes)
	I1101 09:07:29.015953  169856 exec_runner.go:144] found /home/jenkins/minikube-integration/21835-5913/.minikube/ca.pem, removing ...
	I1101 09:07:29.015965  169856 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21835-5913/.minikube/ca.pem
	I1101 09:07:29.016012  169856 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21835-5913/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21835-5913/.minikube/ca.pem (1078 bytes)
	I1101 09:07:29.016091  169856 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21835-5913/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21835-5913/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21835-5913/.minikube/certs/ca-key.pem org=jenkins.test-preload-398259 san=[127.0.0.1 192.168.76.2 localhost minikube test-preload-398259]
	I1101 09:07:29.049235  169856 provision.go:177] copyRemoteCerts
	I1101 09:07:29.049295  169856 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1101 09:07:29.049329  169856 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-398259
	I1101 09:07:29.068681  169856 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32958 SSHKeyPath:/home/jenkins/minikube-integration/21835-5913/.minikube/machines/test-preload-398259/id_rsa Username:docker}
	I1101 09:07:29.169520  169856 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-5913/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1101 09:07:29.188542  169856 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-5913/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1101 09:07:29.207336  169856 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-5913/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1101 09:07:29.225975  169856 provision.go:87] duration metric: took 229.992409ms to configureAuth
	I1101 09:07:29.226011  169856 ubuntu.go:206] setting minikube options for container-runtime
	I1101 09:07:29.226207  169856 config.go:182] Loaded profile config "test-preload-398259": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I1101 09:07:29.226324  169856 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-398259
	I1101 09:07:29.245034  169856 main.go:143] libmachine: Using SSH client type: native
	I1101 09:07:29.245260  169856 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 32958 <nil> <nil>}
	I1101 09:07:29.245283  169856 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1101 09:07:29.524358  169856 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1101 09:07:29.524383  169856 machine.go:97] duration metric: took 4.028165594s to provisionDockerMachine
	I1101 09:07:29.524396  169856 start.go:293] postStartSetup for "test-preload-398259" (driver="docker")
	I1101 09:07:29.524410  169856 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1101 09:07:29.524480  169856 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1101 09:07:29.524530  169856 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-398259
	I1101 09:07:29.543471  169856 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32958 SSHKeyPath:/home/jenkins/minikube-integration/21835-5913/.minikube/machines/test-preload-398259/id_rsa Username:docker}
	I1101 09:07:29.645763  169856 ssh_runner.go:195] Run: cat /etc/os-release
	I1101 09:07:29.649723  169856 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1101 09:07:29.649761  169856 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1101 09:07:29.649784  169856 filesync.go:126] Scanning /home/jenkins/minikube-integration/21835-5913/.minikube/addons for local assets ...
	I1101 09:07:29.649836  169856 filesync.go:126] Scanning /home/jenkins/minikube-integration/21835-5913/.minikube/files for local assets ...
	I1101 09:07:29.649937  169856 filesync.go:149] local asset: /home/jenkins/minikube-integration/21835-5913/.minikube/files/etc/ssl/certs/94142.pem -> 94142.pem in /etc/ssl/certs
	I1101 09:07:29.650042  169856 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1101 09:07:29.658329  169856 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-5913/.minikube/files/etc/ssl/certs/94142.pem --> /etc/ssl/certs/94142.pem (1708 bytes)
	I1101 09:07:29.677303  169856 start.go:296] duration metric: took 152.888972ms for postStartSetup
	I1101 09:07:29.677395  169856 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1101 09:07:29.677432  169856 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-398259
	I1101 09:07:29.696350  169856 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32958 SSHKeyPath:/home/jenkins/minikube-integration/21835-5913/.minikube/machines/test-preload-398259/id_rsa Username:docker}
	I1101 09:07:29.795418  169856 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1101 09:07:29.800303  169856 fix.go:56] duration metric: took 4.616452628s for fixHost
	I1101 09:07:29.800335  169856 start.go:83] releasing machines lock for "test-preload-398259", held for 4.616501942s
	I1101 09:07:29.800498  169856 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" test-preload-398259
	I1101 09:07:29.819737  169856 ssh_runner.go:195] Run: cat /version.json
	I1101 09:07:29.819802  169856 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-398259
	I1101 09:07:29.819806  169856 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1101 09:07:29.819873  169856 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-398259
	I1101 09:07:29.840121  169856 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32958 SSHKeyPath:/home/jenkins/minikube-integration/21835-5913/.minikube/machines/test-preload-398259/id_rsa Username:docker}
	I1101 09:07:29.841298  169856 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32958 SSHKeyPath:/home/jenkins/minikube-integration/21835-5913/.minikube/machines/test-preload-398259/id_rsa Username:docker}
	I1101 09:07:29.938702  169856 ssh_runner.go:195] Run: systemctl --version
	I1101 09:07:29.993349  169856 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1101 09:07:30.030777  169856 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1101 09:07:30.035651  169856 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1101 09:07:30.035723  169856 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1101 09:07:30.044111  169856 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1101 09:07:30.044135  169856 start.go:496] detecting cgroup driver to use...
	I1101 09:07:30.044168  169856 detect.go:190] detected "systemd" cgroup driver on host os
	I1101 09:07:30.044227  169856 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1101 09:07:30.060037  169856 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1101 09:07:30.073218  169856 docker.go:218] disabling cri-docker service (if available) ...
	I1101 09:07:30.073278  169856 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1101 09:07:30.088126  169856 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1101 09:07:30.101302  169856 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1101 09:07:30.178781  169856 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1101 09:07:30.257124  169856 docker.go:234] disabling docker service ...
	I1101 09:07:30.257198  169856 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1101 09:07:30.271878  169856 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1101 09:07:30.284778  169856 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1101 09:07:30.364102  169856 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1101 09:07:30.446394  169856 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1101 09:07:30.459603  169856 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1101 09:07:30.474488  169856 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1101 09:07:30.474549  169856 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:07:30.483932  169856 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1101 09:07:30.484010  169856 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:07:30.493442  169856 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:07:30.502636  169856 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:07:30.511854  169856 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1101 09:07:30.520589  169856 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:07:30.530230  169856 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:07:30.539176  169856 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:07:30.548757  169856 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1101 09:07:30.556640  169856 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1101 09:07:30.564758  169856 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 09:07:30.647959  169856 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1101 09:07:30.758176  169856 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1101 09:07:30.758244  169856 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1101 09:07:30.762562  169856 start.go:564] Will wait 60s for crictl version
	I1101 09:07:30.762620  169856 ssh_runner.go:195] Run: which crictl
	I1101 09:07:30.766500  169856 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1101 09:07:30.791759  169856 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1101 09:07:30.791909  169856 ssh_runner.go:195] Run: crio --version
	I1101 09:07:30.820348  169856 ssh_runner.go:195] Run: crio --version
	I1101 09:07:30.850835  169856 out.go:179] * Preparing Kubernetes v1.32.0 on CRI-O 1.34.1 ...
	I1101 09:07:30.852134  169856 cli_runner.go:164] Run: docker network inspect test-preload-398259 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1101 09:07:30.869632  169856 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1101 09:07:30.873832  169856 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1101 09:07:30.884595  169856 kubeadm.go:884] updating cluster {Name:test-preload-398259 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:test-preload-398259 Namespace:default APIServerHAVIP: APIServerName:minikubeCA
APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics
:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1101 09:07:30.884713  169856 preload.go:183] Checking if preload exists for k8s version v1.32.0 and runtime crio
	I1101 09:07:30.884777  169856 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 09:07:30.917463  169856 crio.go:514] all images are preloaded for cri-o runtime.
	I1101 09:07:30.917485  169856 crio.go:433] Images already preloaded, skipping extraction
	I1101 09:07:30.917543  169856 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 09:07:30.945524  169856 crio.go:514] all images are preloaded for cri-o runtime.
	I1101 09:07:30.945548  169856 cache_images.go:86] Images are preloaded, skipping loading
	I1101 09:07:30.945556  169856 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.32.0 crio true true} ...
	I1101 09:07:30.945649  169856 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=test-preload-398259 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.0 ClusterName:test-preload-398259 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1101 09:07:30.945709  169856 ssh_runner.go:195] Run: crio config
	I1101 09:07:30.993796  169856 cni.go:84] Creating CNI manager for ""
	I1101 09:07:30.993822  169856 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1101 09:07:30.993842  169856 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1101 09:07:30.993873  169856 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.32.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:test-preload-398259 NodeName:test-preload-398259 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/
etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1101 09:07:30.994033  169856 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "test-preload-398259"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.32.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1101 09:07:30.994098  169856 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.0
	I1101 09:07:31.002561  169856 binaries.go:44] Found k8s binaries, skipping transfer
	I1101 09:07:31.002628  169856 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1101 09:07:31.010968  169856 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (369 bytes)
	I1101 09:07:31.024334  169856 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1101 09:07:31.037842  169856 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2215 bytes)
	I1101 09:07:31.051430  169856 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1101 09:07:31.055380  169856 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1101 09:07:31.065917  169856 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 09:07:31.147018  169856 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1101 09:07:31.169655  169856 certs.go:69] Setting up /home/jenkins/minikube-integration/21835-5913/.minikube/profiles/test-preload-398259 for IP: 192.168.76.2
	I1101 09:07:31.169695  169856 certs.go:195] generating shared ca certs ...
	I1101 09:07:31.169716  169856 certs.go:227] acquiring lock for ca certs: {Name:mkfdee6a84670347521013ebeef165551380cb9c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:07:31.169925  169856 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21835-5913/.minikube/ca.key
	I1101 09:07:31.169976  169856 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21835-5913/.minikube/proxy-client-ca.key
	I1101 09:07:31.169988  169856 certs.go:257] generating profile certs ...
	I1101 09:07:31.170133  169856 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21835-5913/.minikube/profiles/test-preload-398259/client.key
	I1101 09:07:31.170218  169856 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21835-5913/.minikube/profiles/test-preload-398259/apiserver.key.44ab06a5
	I1101 09:07:31.170270  169856 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21835-5913/.minikube/profiles/test-preload-398259/proxy-client.key
	I1101 09:07:31.170412  169856 certs.go:484] found cert: /home/jenkins/minikube-integration/21835-5913/.minikube/certs/9414.pem (1338 bytes)
	W1101 09:07:31.170452  169856 certs.go:480] ignoring /home/jenkins/minikube-integration/21835-5913/.minikube/certs/9414_empty.pem, impossibly tiny 0 bytes
	I1101 09:07:31.170465  169856 certs.go:484] found cert: /home/jenkins/minikube-integration/21835-5913/.minikube/certs/ca-key.pem (1675 bytes)
	I1101 09:07:31.170498  169856 certs.go:484] found cert: /home/jenkins/minikube-integration/21835-5913/.minikube/certs/ca.pem (1078 bytes)
	I1101 09:07:31.170529  169856 certs.go:484] found cert: /home/jenkins/minikube-integration/21835-5913/.minikube/certs/cert.pem (1123 bytes)
	I1101 09:07:31.170561  169856 certs.go:484] found cert: /home/jenkins/minikube-integration/21835-5913/.minikube/certs/key.pem (1675 bytes)
	I1101 09:07:31.170613  169856 certs.go:484] found cert: /home/jenkins/minikube-integration/21835-5913/.minikube/files/etc/ssl/certs/94142.pem (1708 bytes)
	I1101 09:07:31.171261  169856 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-5913/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1101 09:07:31.190734  169856 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-5913/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1101 09:07:31.211291  169856 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-5913/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1101 09:07:31.231921  169856 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-5913/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1101 09:07:31.258778  169856 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-5913/.minikube/profiles/test-preload-398259/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1101 09:07:31.278278  169856 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-5913/.minikube/profiles/test-preload-398259/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1101 09:07:31.296525  169856 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-5913/.minikube/profiles/test-preload-398259/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1101 09:07:31.315168  169856 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-5913/.minikube/profiles/test-preload-398259/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1101 09:07:31.333285  169856 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-5913/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1101 09:07:31.351764  169856 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-5913/.minikube/certs/9414.pem --> /usr/share/ca-certificates/9414.pem (1338 bytes)
	I1101 09:07:31.371631  169856 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-5913/.minikube/files/etc/ssl/certs/94142.pem --> /usr/share/ca-certificates/94142.pem (1708 bytes)
	I1101 09:07:31.390173  169856 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1101 09:07:31.403459  169856 ssh_runner.go:195] Run: openssl version
	I1101 09:07:31.409783  169856 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9414.pem && ln -fs /usr/share/ca-certificates/9414.pem /etc/ssl/certs/9414.pem"
	I1101 09:07:31.419122  169856 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9414.pem
	I1101 09:07:31.423555  169856 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  1 08:35 /usr/share/ca-certificates/9414.pem
	I1101 09:07:31.423619  169856 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9414.pem
	I1101 09:07:31.458785  169856 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/9414.pem /etc/ssl/certs/51391683.0"
	I1101 09:07:31.467497  169856 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/94142.pem && ln -fs /usr/share/ca-certificates/94142.pem /etc/ssl/certs/94142.pem"
	I1101 09:07:31.476838  169856 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/94142.pem
	I1101 09:07:31.481115  169856 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  1 08:35 /usr/share/ca-certificates/94142.pem
	I1101 09:07:31.481183  169856 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/94142.pem
	I1101 09:07:31.515333  169856 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/94142.pem /etc/ssl/certs/3ec20f2e.0"
	I1101 09:07:31.524273  169856 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1101 09:07:31.533442  169856 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1101 09:07:31.537719  169856 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  1 08:29 /usr/share/ca-certificates/minikubeCA.pem
	I1101 09:07:31.537784  169856 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1101 09:07:31.573526  169856 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1101 09:07:31.582343  169856 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1101 09:07:31.586484  169856 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1101 09:07:31.621235  169856 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1101 09:07:31.656176  169856 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1101 09:07:31.698971  169856 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1101 09:07:31.745754  169856 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1101 09:07:31.785230  169856 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1101 09:07:31.820440  169856 kubeadm.go:401] StartCluster: {Name:test-preload-398259 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:test-preload-398259 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 09:07:31.820522  169856 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1101 09:07:31.820572  169856 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1101 09:07:31.849014  169856 cri.go:89] found id: ""
	I1101 09:07:31.849088  169856 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1101 09:07:31.858001  169856 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1101 09:07:31.858075  169856 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1101 09:07:31.858137  169856 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1101 09:07:31.866817  169856 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1101 09:07:31.867298  169856 kubeconfig.go:47] verify endpoint returned: get endpoint: "test-preload-398259" does not appear in /home/jenkins/minikube-integration/21835-5913/kubeconfig
	I1101 09:07:31.867442  169856 kubeconfig.go:62] /home/jenkins/minikube-integration/21835-5913/kubeconfig needs updating (will repair): [kubeconfig missing "test-preload-398259" cluster setting kubeconfig missing "test-preload-398259" context setting]
	I1101 09:07:31.867791  169856 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21835-5913/kubeconfig: {Name:mk9d3795c1875d6d9ba81c81d9d8436a6b7942d2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:07:31.868324  169856 kapi.go:59] client config for test-preload-398259: &rest.Config{Host:"https://192.168.76.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21835-5913/.minikube/profiles/test-preload-398259/client.crt", KeyFile:"/home/jenkins/minikube-integration/21835-5913/.minikube/profiles/test-preload-398259/client.key", CAFile:"/home/jenkins/minikube-integration/21835-5913/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x281c680), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1101 09:07:31.868755  169856 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1101 09:07:31.868773  169856 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1101 09:07:31.868780  169856 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1101 09:07:31.868785  169856 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1101 09:07:31.868790  169856 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1101 09:07:31.869142  169856 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1101 09:07:31.878156  169856 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.76.2
	I1101 09:07:31.878192  169856 kubeadm.go:602] duration metric: took 20.105577ms to restartPrimaryControlPlane
	I1101 09:07:31.878202  169856 kubeadm.go:403] duration metric: took 57.770466ms to StartCluster
	I1101 09:07:31.878217  169856 settings.go:142] acquiring lock: {Name:mkb1ba7d0d4bb15f3f0746ce486d72703f901580 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:07:31.878298  169856 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21835-5913/kubeconfig
	I1101 09:07:31.879111  169856 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21835-5913/kubeconfig: {Name:mk9d3795c1875d6d9ba81c81d9d8436a6b7942d2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:07:31.879512  169856 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1101 09:07:31.879612  169856 config.go:182] Loaded profile config "test-preload-398259": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I1101 09:07:31.879576  169856 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1101 09:07:31.879667  169856 addons.go:70] Setting storage-provisioner=true in profile "test-preload-398259"
	I1101 09:07:31.879680  169856 addons.go:70] Setting default-storageclass=true in profile "test-preload-398259"
	I1101 09:07:31.879683  169856 addons.go:239] Setting addon storage-provisioner=true in "test-preload-398259"
	W1101 09:07:31.879691  169856 addons.go:248] addon storage-provisioner should already be in state true
	I1101 09:07:31.879693  169856 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "test-preload-398259"
	I1101 09:07:31.879733  169856 host.go:66] Checking if "test-preload-398259" exists ...
	I1101 09:07:31.880030  169856 cli_runner.go:164] Run: docker container inspect test-preload-398259 --format={{.State.Status}}
	I1101 09:07:31.880176  169856 cli_runner.go:164] Run: docker container inspect test-preload-398259 --format={{.State.Status}}
	I1101 09:07:31.884611  169856 out.go:179] * Verifying Kubernetes components...
	I1101 09:07:31.885994  169856 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 09:07:31.901280  169856 kapi.go:59] client config for test-preload-398259: &rest.Config{Host:"https://192.168.76.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21835-5913/.minikube/profiles/test-preload-398259/client.crt", KeyFile:"/home/jenkins/minikube-integration/21835-5913/.minikube/profiles/test-preload-398259/client.key", CAFile:"/home/jenkins/minikube-integration/21835-5913/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x281c680), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1101 09:07:31.901646  169856 addons.go:239] Setting addon default-storageclass=true in "test-preload-398259"
	W1101 09:07:31.901669  169856 addons.go:248] addon default-storageclass should already be in state true
	I1101 09:07:31.901698  169856 host.go:66] Checking if "test-preload-398259" exists ...
	I1101 09:07:31.902208  169856 cli_runner.go:164] Run: docker container inspect test-preload-398259 --format={{.State.Status}}
	I1101 09:07:31.903350  169856 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1101 09:07:31.905088  169856 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1101 09:07:31.905108  169856 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1101 09:07:31.905168  169856 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-398259
	I1101 09:07:31.927612  169856 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1101 09:07:31.927641  169856 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1101 09:07:31.927775  169856 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-398259
	I1101 09:07:31.931975  169856 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32958 SSHKeyPath:/home/jenkins/minikube-integration/21835-5913/.minikube/machines/test-preload-398259/id_rsa Username:docker}
	I1101 09:07:31.948750  169856 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32958 SSHKeyPath:/home/jenkins/minikube-integration/21835-5913/.minikube/machines/test-preload-398259/id_rsa Username:docker}
	I1101 09:07:31.987968  169856 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1101 09:07:32.001931  169856 node_ready.go:35] waiting up to 6m0s for node "test-preload-398259" to be "Ready" ...
	I1101 09:07:32.039803  169856 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1101 09:07:32.059305  169856 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	W1101 09:07:32.103997  169856 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 09:07:32.104045  169856 retry.go:31] will retry after 326.869362ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1101 09:07:32.120927  169856 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 09:07:32.120972  169856 retry.go:31] will retry after 237.03307ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 09:07:32.358422  169856 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1101 09:07:32.414234  169856 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 09:07:32.414273  169856 retry.go:31] will retry after 268.036048ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 09:07:32.431506  169856 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1101 09:07:32.488128  169856 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 09:07:32.488163  169856 retry.go:31] will retry after 464.262942ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
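The retry.go delays above (a few hundred milliseconds at first, growing toward tens of seconds later in the log) follow the usual pattern of jittered, roughly geometric backoff while the apiserver on localhost:8443 is still coming back up: kubectl's client-side validation needs the OpenAPI schema from the apiserver, so each apply fails immediately with "connection refused". A minimal sketch of such a retry loop, assuming a doubling base delay with random jitter (illustrative only, not minikube's retry implementation):

	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	// retryWithBackoff retries fn up to attempts times, doubling the base delay
	// after each failure and adding up to ~50% random jitter. Illustrative only.
	func retryWithBackoff(attempts int, base time.Duration, fn func() error) error {
		delay := base
		var err error
		for i := 0; i < attempts; i++ {
			if err = fn(); err == nil {
				return nil
			}
			jitter := time.Duration(rand.Int63n(int64(delay)/2 + 1))
			fmt.Printf("will retry after %v: %v\n", delay+jitter, err)
			time.Sleep(delay + jitter)
			delay *= 2
		}
		return err
	}

	func main() {
		// Stand-in for the failing `kubectl apply` of the addon manifest.
		_ = retryWithBackoff(5, 300*time.Millisecond, func() error {
			return errors.New("dial tcp [::1]:8443: connect: connection refused")
		})
	}

The error text itself names the alternative the tooling does not take here: passing --validate=false would skip the schema download and let the apply reach the apiserver directly.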
	I1101 09:07:32.683268  169856 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1101 09:07:32.740397  169856 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 09:07:32.740433  169856 retry.go:31] will retry after 389.349488ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 09:07:32.952728  169856 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1101 09:07:33.010489  169856 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 09:07:33.010536  169856 retry.go:31] will retry after 505.993678ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 09:07:33.130842  169856 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1101 09:07:33.185032  169856 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 09:07:33.185060  169856 retry.go:31] will retry after 569.686637ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 09:07:33.516751  169856 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1101 09:07:33.573891  169856 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 09:07:33.573920  169856 retry.go:31] will retry after 649.976315ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 09:07:33.755186  169856 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1101 09:07:33.813562  169856 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 09:07:33.813602  169856 retry.go:31] will retry after 1.201195402s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1101 09:07:34.003513  169856 node_ready.go:55] error getting node "test-preload-398259" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-398259": dial tcp 192.168.76.2:8443: connect: connection refused
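Warnings like the one above come from node_ready.go polling the node object's Ready condition through the apiserver; "connection refused" on 192.168.76.2:8443 simply means the restarted apiserver is not yet accepting connections, and the poll keeps retrying within the 6m0s window. A small client-go sketch of this kind of readiness check (the kubeconfig handling and function name are illustrative, not minikube's code):

	package main

	import (
		"context"
		"fmt"
		"os"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// nodeIsReady fetches the named node and reports whether its Ready
	// condition is True. A Get error (e.g. "connection refused" while the
	// apiserver restarts) is returned to the caller, which would retry.
	func nodeIsReady(client kubernetes.Interface, name string) (bool, error) {
		node, err := client.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		for _, c := range node.Status.Conditions {
			if c.Type == corev1.NodeReady {
				return c.Status == corev1.ConditionTrue, nil
			}
		}
		return false, nil
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
		if err != nil {
			panic(err)
		}
		ready, err := nodeIsReady(kubernetes.NewForConfigOrDie(cfg), "test-preload-398259")
		fmt.Println("ready:", ready, "err:", err)
	}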
	I1101 09:07:34.224854  169856 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1101 09:07:34.282001  169856 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 09:07:34.282033  169856 retry.go:31] will retry after 1.62604091s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 09:07:35.014977  169856 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1101 09:07:35.070876  169856 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 09:07:35.070925  169856 retry.go:31] will retry after 1.955150497s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 09:07:35.908272  169856 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1101 09:07:35.965132  169856 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 09:07:35.965166  169856 retry.go:31] will retry after 1.145359452s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1101 09:07:36.502766  169856 node_ready.go:55] error getting node "test-preload-398259" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-398259": dial tcp 192.168.76.2:8443: connect: connection refused
	I1101 09:07:37.027284  169856 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1101 09:07:37.085347  169856 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 09:07:37.085379  169856 retry.go:31] will retry after 1.834341161s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 09:07:37.111629  169856 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1101 09:07:37.169028  169856 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 09:07:37.169060  169856 retry.go:31] will retry after 2.824480099s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1101 09:07:38.503498  169856 node_ready.go:55] error getting node "test-preload-398259" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-398259": dial tcp 192.168.76.2:8443: connect: connection refused
	I1101 09:07:38.920043  169856 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1101 09:07:38.976952  169856 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 09:07:38.976992  169856 retry.go:31] will retry after 6.326184229s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 09:07:39.994208  169856 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1101 09:07:40.053350  169856 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 09:07:40.053380  169856 retry.go:31] will retry after 3.165551359s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1101 09:07:40.503545  169856 node_ready.go:55] error getting node "test-preload-398259" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-398259": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 09:07:43.002892  169856 node_ready.go:55] error getting node "test-preload-398259" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-398259": dial tcp 192.168.76.2:8443: connect: connection refused
	I1101 09:07:43.219209  169856 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1101 09:07:43.275805  169856 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 09:07:43.275833  169856 retry.go:31] will retry after 8.788910692s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 09:07:45.303334  169856 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1101 09:07:45.360611  169856 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 09:07:45.360647  169856 retry.go:31] will retry after 4.530632619s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1101 09:07:45.503308  169856 node_ready.go:55] error getting node "test-preload-398259" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-398259": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 09:07:48.003158  169856 node_ready.go:55] error getting node "test-preload-398259" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-398259": dial tcp 192.168.76.2:8443: connect: connection refused
	I1101 09:07:49.892308  169856 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1101 09:07:49.949937  169856 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 09:07:49.949976  169856 retry.go:31] will retry after 5.985181439s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1101 09:07:50.503522  169856 node_ready.go:55] error getting node "test-preload-398259" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-398259": dial tcp 192.168.76.2:8443: connect: connection refused
	I1101 09:07:52.064986  169856 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1101 09:07:52.120651  169856 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 09:07:52.120685  169856 retry.go:31] will retry after 10.924312634s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1101 09:07:53.002659  169856 node_ready.go:55] error getting node "test-preload-398259" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-398259": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 09:07:55.502766  169856 node_ready.go:55] error getting node "test-preload-398259" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-398259": dial tcp 192.168.76.2:8443: connect: connection refused
	I1101 09:07:55.936375  169856 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1101 09:07:55.993387  169856 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 09:07:55.993425  169856 retry.go:31] will retry after 12.747359897s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1101 09:07:57.503016  169856 node_ready.go:55] error getting node "test-preload-398259" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-398259": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 09:08:00.002675  169856 node_ready.go:55] error getting node "test-preload-398259" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-398259": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 09:08:02.003026  169856 node_ready.go:55] error getting node "test-preload-398259" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-398259": dial tcp 192.168.76.2:8443: connect: connection refused
	I1101 09:08:03.045449  169856 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1101 09:08:03.102161  169856 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 09:08:03.102196  169856 retry.go:31] will retry after 18.789725991s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1101 09:08:04.003350  169856 node_ready.go:55] error getting node "test-preload-398259" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-398259": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 09:08:06.502604  169856 node_ready.go:55] error getting node "test-preload-398259" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-398259": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 09:08:08.502940  169856 node_ready.go:55] error getting node "test-preload-398259" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-398259": dial tcp 192.168.76.2:8443: connect: connection refused
	I1101 09:08:08.741355  169856 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1101 09:08:08.798604  169856 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 09:08:08.798636  169856 retry.go:31] will retry after 27.473065748s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1101 09:08:11.002695  169856 node_ready.go:55] error getting node "test-preload-398259" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-398259": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 09:08:13.003157  169856 node_ready.go:55] error getting node "test-preload-398259" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-398259": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 09:08:15.003463  169856 node_ready.go:55] error getting node "test-preload-398259" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-398259": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 09:08:17.003547  169856 node_ready.go:55] error getting node "test-preload-398259" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-398259": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 09:08:19.502544  169856 node_ready.go:55] error getting node "test-preload-398259" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-398259": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 09:08:21.503549  169856 node_ready.go:55] error getting node "test-preload-398259" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-398259": dial tcp 192.168.76.2:8443: connect: connection refused
	I1101 09:08:21.892328  169856 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1101 09:08:21.949295  169856 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 09:08:21.949331  169856 retry.go:31] will retry after 30.093713452s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1101 09:08:24.002847  169856 node_ready.go:55] error getting node "test-preload-398259" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-398259": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 09:08:26.502667  169856 node_ready.go:55] error getting node "test-preload-398259" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-398259": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 09:08:28.503560  169856 node_ready.go:55] error getting node "test-preload-398259" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-398259": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 09:08:31.003108  169856 node_ready.go:55] error getting node "test-preload-398259" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-398259": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 09:08:33.502661  169856 node_ready.go:55] error getting node "test-preload-398259" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-398259": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 09:08:35.502782  169856 node_ready.go:55] error getting node "test-preload-398259" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-398259": dial tcp 192.168.76.2:8443: connect: connection refused
	I1101 09:08:36.272311  169856 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1101 09:08:36.326324  169856 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 09:08:36.326357  169856 retry.go:31] will retry after 36.515182494s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1101 09:08:37.503665  169856 node_ready.go:55] error getting node "test-preload-398259" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-398259": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 09:08:40.002549  169856 node_ready.go:55] error getting node "test-preload-398259" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-398259": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 09:08:42.002904  169856 node_ready.go:55] error getting node "test-preload-398259" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-398259": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 09:08:44.003432  169856 node_ready.go:55] error getting node "test-preload-398259" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-398259": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 09:08:46.502577  169856 node_ready.go:55] error getting node "test-preload-398259" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-398259": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 09:08:48.503557  169856 node_ready.go:55] error getting node "test-preload-398259" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-398259": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 09:08:51.002577  169856 node_ready.go:55] error getting node "test-preload-398259" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-398259": dial tcp 192.168.76.2:8443: connect: connection refused
	I1101 09:08:52.044064  169856 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1101 09:08:52.099788  169856 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 09:08:52.099824  169856 retry.go:31] will retry after 17.28065807s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1101 09:08:53.502627  169856 node_ready.go:55] error getting node "test-preload-398259" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-398259": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 09:08:55.503283  169856 node_ready.go:55] error getting node "test-preload-398259" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-398259": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 09:08:58.002848  169856 node_ready.go:55] error getting node "test-preload-398259" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-398259": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 09:09:00.502791  169856 node_ready.go:55] error getting node "test-preload-398259" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-398259": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 09:09:03.003198  169856 node_ready.go:55] error getting node "test-preload-398259" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-398259": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 09:09:05.502772  169856 node_ready.go:55] error getting node "test-preload-398259" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-398259": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 09:09:07.503386  169856 node_ready.go:55] error getting node "test-preload-398259" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-398259": dial tcp 192.168.76.2:8443: connect: connection refused
	I1101 09:09:09.381371  169856 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1101 09:09:09.438826  169856 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1101 09:09:09.438962  169856 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	W1101 09:09:09.503535  169856 node_ready.go:55] error getting node "test-preload-398259" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-398259": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 09:09:12.002568  169856 node_ready.go:55] error getting node "test-preload-398259" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-398259": dial tcp 192.168.76.2:8443: connect: connection refused
	I1101 09:09:12.842260  169856 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1101 09:09:12.896883  169856 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1101 09:09:12.897013  169856 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1101 09:09:12.898894  169856 out.go:179] * Enabled addons: 
	I1101 09:09:12.900429  169856 addons.go:515] duration metric: took 1m41.020853674s for enable addons: enabled=[]
	W1101 09:09:14.002688  169856 node_ready.go:55] error getting node "test-preload-398259" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-398259": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 09:09:16.502619  169856 node_ready.go:55] error getting node "test-preload-398259" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-398259": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 09:09:18.503391  169856 node_ready.go:55] error getting node "test-preload-398259" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-398259": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 09:09:21.002954  169856 node_ready.go:55] error getting node "test-preload-398259" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-398259": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 09:09:23.502849  169856 node_ready.go:55] error getting node "test-preload-398259" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-398259": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 09:09:25.503054  169856 node_ready.go:55] error getting node "test-preload-398259" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-398259": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 09:09:28.003201  169856 node_ready.go:55] error getting node "test-preload-398259" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-398259": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 09:09:30.502628  169856 node_ready.go:55] error getting node "test-preload-398259" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-398259": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 09:09:32.503011  169856 node_ready.go:55] error getting node "test-preload-398259" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-398259": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 09:09:35.002676  169856 node_ready.go:55] error getting node "test-preload-398259" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-398259": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 09:09:37.003350  169856 node_ready.go:55] error getting node "test-preload-398259" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-398259": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 09:09:39.503343  169856 node_ready.go:55] error getting node "test-preload-398259" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-398259": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 09:09:42.002850  169856 node_ready.go:55] error getting node "test-preload-398259" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-398259": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 09:09:44.502603  169856 node_ready.go:55] error getting node "test-preload-398259" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-398259": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 09:09:46.502822  169856 node_ready.go:55] error getting node "test-preload-398259" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-398259": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 09:09:49.002625  169856 node_ready.go:55] error getting node "test-preload-398259" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-398259": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 09:09:51.502740  169856 node_ready.go:55] error getting node "test-preload-398259" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-398259": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 09:09:53.503520  169856 node_ready.go:55] error getting node "test-preload-398259" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-398259": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 09:09:56.002800  169856 node_ready.go:55] error getting node "test-preload-398259" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-398259": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 09:09:58.003541  169856 node_ready.go:55] error getting node "test-preload-398259" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-398259": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 09:10:00.502804  169856 node_ready.go:55] error getting node "test-preload-398259" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-398259": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 09:10:03.002704  169856 node_ready.go:55] error getting node "test-preload-398259" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-398259": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 09:10:05.003342  169856 node_ready.go:55] error getting node "test-preload-398259" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-398259": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 09:10:07.502816  169856 node_ready.go:55] error getting node "test-preload-398259" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-398259": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 09:10:10.002805  169856 node_ready.go:55] error getting node "test-preload-398259" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-398259": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 09:10:12.003525  169856 node_ready.go:55] error getting node "test-preload-398259" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-398259": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 09:10:14.503301  169856 node_ready.go:55] error getting node "test-preload-398259" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-398259": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 09:10:17.002989  169856 node_ready.go:55] error getting node "test-preload-398259" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-398259": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 09:10:19.502707  169856 node_ready.go:55] error getting node "test-preload-398259" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-398259": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 09:10:21.503577  169856 node_ready.go:55] error getting node "test-preload-398259" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-398259": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 09:10:24.003223  169856 node_ready.go:55] error getting node "test-preload-398259" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-398259": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 09:10:26.502749  169856 node_ready.go:55] error getting node "test-preload-398259" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-398259": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 09:10:28.503361  169856 node_ready.go:55] error getting node "test-preload-398259" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-398259": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 09:10:31.002953  169856 node_ready.go:55] error getting node "test-preload-398259" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-398259": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 09:10:33.502758  169856 node_ready.go:55] error getting node "test-preload-398259" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-398259": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 09:10:35.503454  169856 node_ready.go:55] error getting node "test-preload-398259" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-398259": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 09:10:38.003140  169856 node_ready.go:55] error getting node "test-preload-398259" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-398259": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 09:10:40.502767  169856 node_ready.go:55] error getting node "test-preload-398259" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-398259": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 09:10:43.002731  169856 node_ready.go:55] error getting node "test-preload-398259" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-398259": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 09:10:45.003360  169856 node_ready.go:55] error getting node "test-preload-398259" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-398259": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 09:10:47.503416  169856 node_ready.go:55] error getting node "test-preload-398259" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-398259": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 09:10:50.002946  169856 node_ready.go:55] error getting node "test-preload-398259" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-398259": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 09:10:52.502749  169856 node_ready.go:55] error getting node "test-preload-398259" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-398259": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 09:10:55.002631  169856 node_ready.go:55] error getting node "test-preload-398259" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-398259": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 09:10:57.003511  169856 node_ready.go:55] error getting node "test-preload-398259" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-398259": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 09:10:59.503454  169856 node_ready.go:55] error getting node "test-preload-398259" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-398259": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 09:11:02.002954  169856 node_ready.go:55] error getting node "test-preload-398259" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-398259": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 09:11:04.502945  169856 node_ready.go:55] error getting node "test-preload-398259" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-398259": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 09:11:07.002607  169856 node_ready.go:55] error getting node "test-preload-398259" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-398259": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 09:11:09.003483  169856 node_ready.go:55] error getting node "test-preload-398259" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-398259": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 09:11:11.502915  169856 node_ready.go:55] error getting node "test-preload-398259" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-398259": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 09:11:14.002830  169856 node_ready.go:55] error getting node "test-preload-398259" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-398259": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 09:11:16.003565  169856 node_ready.go:55] error getting node "test-preload-398259" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-398259": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 09:11:18.503330  169856 node_ready.go:55] error getting node "test-preload-398259" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-398259": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 09:11:21.002984  169856 node_ready.go:55] error getting node "test-preload-398259" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-398259": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 09:11:23.502898  169856 node_ready.go:55] error getting node "test-preload-398259" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-398259": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 09:11:25.503537  169856 node_ready.go:55] error getting node "test-preload-398259" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-398259": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 09:11:28.003279  169856 node_ready.go:55] error getting node "test-preload-398259" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-398259": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 09:11:30.003596  169856 node_ready.go:55] error getting node "test-preload-398259" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-398259": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 09:11:32.502910  169856 node_ready.go:55] error getting node "test-preload-398259" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-398259": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 09:11:34.503511  169856 node_ready.go:55] error getting node "test-preload-398259" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-398259": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 09:11:37.002573  169856 node_ready.go:55] error getting node "test-preload-398259" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-398259": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 09:11:39.002749  169856 node_ready.go:55] error getting node "test-preload-398259" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-398259": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 09:11:41.502648  169856 node_ready.go:55] error getting node "test-preload-398259" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-398259": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 09:11:43.503457  169856 node_ready.go:55] error getting node "test-preload-398259" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-398259": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 09:11:46.002582  169856 node_ready.go:55] error getting node "test-preload-398259" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-398259": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 09:11:48.002825  169856 node_ready.go:55] error getting node "test-preload-398259" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-398259": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 09:11:50.502739  169856 node_ready.go:55] error getting node "test-preload-398259" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-398259": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 09:11:52.502831  169856 node_ready.go:55] error getting node "test-preload-398259" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-398259": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 09:11:54.503534  169856 node_ready.go:55] error getting node "test-preload-398259" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-398259": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 09:11:57.003627  169856 node_ready.go:55] error getting node "test-preload-398259" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-398259": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 09:11:59.502669  169856 node_ready.go:55] error getting node "test-preload-398259" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-398259": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 09:12:02.002860  169856 node_ready.go:55] error getting node "test-preload-398259" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-398259": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 09:12:04.003305  169856 node_ready.go:55] error getting node "test-preload-398259" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-398259": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 09:12:06.003471  169856 node_ready.go:55] error getting node "test-preload-398259" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-398259": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 09:12:08.502696  169856 node_ready.go:55] error getting node "test-preload-398259" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-398259": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 09:12:11.002566  169856 node_ready.go:55] error getting node "test-preload-398259" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-398259": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 09:12:13.002955  169856 node_ready.go:55] error getting node "test-preload-398259" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-398259": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 09:12:15.502731  169856 node_ready.go:55] error getting node "test-preload-398259" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-398259": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 09:12:17.503580  169856 node_ready.go:55] error getting node "test-preload-398259" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-398259": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 09:12:20.002761  169856 node_ready.go:55] error getting node "test-preload-398259" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-398259": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 09:12:22.002919  169856 node_ready.go:55] error getting node "test-preload-398259" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-398259": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 09:12:24.003390  169856 node_ready.go:55] error getting node "test-preload-398259" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-398259": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 09:12:26.502567  169856 node_ready.go:55] error getting node "test-preload-398259" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-398259": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 09:12:28.502763  169856 node_ready.go:55] error getting node "test-preload-398259" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-398259": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 09:12:31.002683  169856 node_ready.go:55] error getting node "test-preload-398259" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-398259": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 09:12:33.003279  169856 node_ready.go:55] error getting node "test-preload-398259" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-398259": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 09:12:35.503220  169856 node_ready.go:55] error getting node "test-preload-398259" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-398259": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 09:12:37.503512  169856 node_ready.go:55] error getting node "test-preload-398259" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-398259": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 09:12:40.003584  169856 node_ready.go:55] error getting node "test-preload-398259" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-398259": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 09:12:42.502634  169856 node_ready.go:55] error getting node "test-preload-398259" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-398259": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 09:12:44.502727  169856 node_ready.go:55] error getting node "test-preload-398259" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-398259": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 09:12:47.002667  169856 node_ready.go:55] error getting node "test-preload-398259" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-398259": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 09:12:49.002973  169856 node_ready.go:55] error getting node "test-preload-398259" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-398259": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 09:12:51.502945  169856 node_ready.go:55] error getting node "test-preload-398259" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-398259": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 09:12:54.003579  169856 node_ready.go:55] error getting node "test-preload-398259" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-398259": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 09:12:56.503518  169856 node_ready.go:55] error getting node "test-preload-398259" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-398259": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 09:12:59.002746  169856 node_ready.go:55] error getting node "test-preload-398259" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-398259": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 09:13:01.503591  169856 node_ready.go:55] error getting node "test-preload-398259" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-398259": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 09:13:04.003334  169856 node_ready.go:55] error getting node "test-preload-398259" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-398259": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 09:13:06.503479  169856 node_ready.go:55] error getting node "test-preload-398259" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-398259": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 09:13:09.002719  169856 node_ready.go:55] error getting node "test-preload-398259" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-398259": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 09:13:11.003538  169856 node_ready.go:55] error getting node "test-preload-398259" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-398259": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 09:13:13.503462  169856 node_ready.go:55] error getting node "test-preload-398259" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-398259": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 09:13:16.003637  169856 node_ready.go:55] error getting node "test-preload-398259" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-398259": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 09:13:18.502719  169856 node_ready.go:55] error getting node "test-preload-398259" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-398259": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 09:13:21.002643  169856 node_ready.go:55] error getting node "test-preload-398259" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-398259": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 09:13:23.003206  169856 node_ready.go:55] error getting node "test-preload-398259" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-398259": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 09:13:25.003575  169856 node_ready.go:55] error getting node "test-preload-398259" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-398259": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 09:13:27.502672  169856 node_ready.go:55] error getting node "test-preload-398259" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-398259": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 09:13:29.503501  169856 node_ready.go:55] error getting node "test-preload-398259" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-398259": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 09:13:31.503582  169856 node_ready.go:55] error getting node "test-preload-398259" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-398259": dial tcp 192.168.76.2:8443: connect: connection refused
	I1101 09:13:32.002410  169856 node_ready.go:38] duration metric: took 6m0.000402692s for node "test-preload-398259" to be "Ready" ...
	I1101 09:13:32.004663  169856 out.go:203] 
	W1101 09:13:32.006128  169856 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1101 09:13:32.006145  169856 out.go:285] * 
	* 
	W1101 09:13:32.007793  169856 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1101 09:13:32.009521  169856 out.go:203] 

                                                
                                                
** /stderr **
preload_test.go:67: out/minikube-linux-amd64 start -p test-preload-398259 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio failed: exit status 80
panic.go:636: *** TestPreload FAILED at 2025-11-01 09:13:32.045874413 +0000 UTC m=+2664.836866488
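For reference, the audit log in the post-mortem below records the full command sequence that produced this failure. Replaying it by hand (a rough sketch, assuming the same profile name, driver, and Kubernetes version) would look approximately like:

	# start the profile without a preload, pinned to Kubernetes v1.32.0
	out/minikube-linux-amd64 start -p test-preload-398259 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker --container-runtime=crio --kubernetes-version=v1.32.0
	# pull an additional image into the cluster's container runtime
	out/minikube-linux-amd64 -p test-preload-398259 image pull gcr.io/k8s-minikube/busybox
	# stop, then restart the same profile; this second start is the step that timed out above
	out/minikube-linux-amd64 stop -p test-preload-398259
	out/minikube-linux-amd64 start -p test-preload-398259 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=docker --container-runtime=crio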
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestPreload]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestPreload]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect test-preload-398259
helpers_test.go:243: (dbg) docker inspect test-preload-398259:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "16e9940a9be076068f08dabc406c5ec4803e0c4fe646037d296869f7cf963649",
	        "Created": "2025-11-01T09:06:30.845999351Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 170074,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-01T09:07:25.233235605Z",
	            "FinishedAt": "2025-11-01T09:07:22.205216671Z"
	        },
	        "Image": "sha256:a1caeebaf98ed0136731e905a1e086f77985a42c2ebb5a7e0b3d0bd7fcbe10cc",
	        "ResolvConfPath": "/var/lib/docker/containers/16e9940a9be076068f08dabc406c5ec4803e0c4fe646037d296869f7cf963649/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/16e9940a9be076068f08dabc406c5ec4803e0c4fe646037d296869f7cf963649/hostname",
	        "HostsPath": "/var/lib/docker/containers/16e9940a9be076068f08dabc406c5ec4803e0c4fe646037d296869f7cf963649/hosts",
	        "LogPath": "/var/lib/docker/containers/16e9940a9be076068f08dabc406c5ec4803e0c4fe646037d296869f7cf963649/16e9940a9be076068f08dabc406c5ec4803e0c4fe646037d296869f7cf963649-json.log",
	        "Name": "/test-preload-398259",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "test-preload-398259:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "test-preload-398259",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "16e9940a9be076068f08dabc406c5ec4803e0c4fe646037d296869f7cf963649",
	                "LowerDir": "/var/lib/docker/overlay2/d2e090dfc604fe9a97dcb305d86348dcf55ba4eabb716a575beee5182675db80-init/diff:/var/lib/docker/overlay2/95b4043403bc6c2b0722fad0bb31817195e1e004282cc78b772e7c8c9f9def2d/diff",
	                "MergedDir": "/var/lib/docker/overlay2/d2e090dfc604fe9a97dcb305d86348dcf55ba4eabb716a575beee5182675db80/merged",
	                "UpperDir": "/var/lib/docker/overlay2/d2e090dfc604fe9a97dcb305d86348dcf55ba4eabb716a575beee5182675db80/diff",
	                "WorkDir": "/var/lib/docker/overlay2/d2e090dfc604fe9a97dcb305d86348dcf55ba4eabb716a575beee5182675db80/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "test-preload-398259",
	                "Source": "/var/lib/docker/volumes/test-preload-398259/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "test-preload-398259",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "test-preload-398259",
	                "name.minikube.sigs.k8s.io": "test-preload-398259",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "54b1047e0995c83d4ba4e1008b510f069ffff7b9d8505a61fbe7050ed5ecbbfb",
	            "SandboxKey": "/var/run/docker/netns/54b1047e0995",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32958"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32959"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32962"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32960"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32961"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "test-preload-398259": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "2a:db:27:88:51:23",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "05ce69d7b0723307c0477565692b281160f7b2326cc97731a9198b114329d699",
	                    "EndpointID": "81cac469f749ea9e793a39c62a8574735c1256fb18883401e26d9799937edb03",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "test-preload-398259",
	                        "16e9940a9be0"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
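The inspect dump above can be narrowed to the fields most relevant here (container state, restart count, and the host port mapped to the API server's 8443/tcp) with docker's Go-template formatter; a minimal sketch, assuming the container still exists:

	docker inspect -f 'status={{.State.Status}} restarts={{.RestartCount}} apiserver={{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostIp}}:{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}' test-preload-398259

Against the dump above this would print status=running restarts=0 apiserver=127.0.0.1:32961.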
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-398259 -n test-preload-398259
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-398259 -n test-preload-398259: exit status 2 (311.593724ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestPreload FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestPreload]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-398259 logs -n 25
helpers_test.go:260: TestPreload logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                           ARGS                                                                            │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ cp      │ multinode-548731 cp multinode-548731-m03:/home/docker/cp-test.txt multinode-548731:/home/docker/cp-test_multinode-548731-m03_multinode-548731.txt         │ multinode-548731     │ jenkins │ v1.37.0 │ 01 Nov 25 09:03 UTC │ 01 Nov 25 09:03 UTC │
	│ ssh     │ multinode-548731 ssh -n multinode-548731-m03 sudo cat /home/docker/cp-test.txt                                                                            │ multinode-548731     │ jenkins │ v1.37.0 │ 01 Nov 25 09:03 UTC │ 01 Nov 25 09:03 UTC │
	│ ssh     │ multinode-548731 ssh -n multinode-548731 sudo cat /home/docker/cp-test_multinode-548731-m03_multinode-548731.txt                                          │ multinode-548731     │ jenkins │ v1.37.0 │ 01 Nov 25 09:03 UTC │ 01 Nov 25 09:03 UTC │
	│ cp      │ multinode-548731 cp multinode-548731-m03:/home/docker/cp-test.txt multinode-548731-m02:/home/docker/cp-test_multinode-548731-m03_multinode-548731-m02.txt │ multinode-548731     │ jenkins │ v1.37.0 │ 01 Nov 25 09:03 UTC │ 01 Nov 25 09:03 UTC │
	│ ssh     │ multinode-548731 ssh -n multinode-548731-m03 sudo cat /home/docker/cp-test.txt                                                                            │ multinode-548731     │ jenkins │ v1.37.0 │ 01 Nov 25 09:03 UTC │ 01 Nov 25 09:03 UTC │
	│ ssh     │ multinode-548731 ssh -n multinode-548731-m02 sudo cat /home/docker/cp-test_multinode-548731-m03_multinode-548731-m02.txt                                  │ multinode-548731     │ jenkins │ v1.37.0 │ 01 Nov 25 09:03 UTC │ 01 Nov 25 09:03 UTC │
	│ node    │ multinode-548731 node stop m03                                                                                                                            │ multinode-548731     │ jenkins │ v1.37.0 │ 01 Nov 25 09:03 UTC │ 01 Nov 25 09:03 UTC │
	│ node    │ multinode-548731 node start m03 -v=5 --alsologtostderr                                                                                                    │ multinode-548731     │ jenkins │ v1.37.0 │ 01 Nov 25 09:03 UTC │ 01 Nov 25 09:03 UTC │
	│ node    │ list -p multinode-548731                                                                                                                                  │ multinode-548731     │ jenkins │ v1.37.0 │ 01 Nov 25 09:03 UTC │                     │
	│ stop    │ -p multinode-548731                                                                                                                                       │ multinode-548731     │ jenkins │ v1.37.0 │ 01 Nov 25 09:03 UTC │ 01 Nov 25 09:03 UTC │
	│ start   │ -p multinode-548731 --wait=true -v=5 --alsologtostderr                                                                                                    │ multinode-548731     │ jenkins │ v1.37.0 │ 01 Nov 25 09:03 UTC │ 01 Nov 25 09:04 UTC │
	│ node    │ list -p multinode-548731                                                                                                                                  │ multinode-548731     │ jenkins │ v1.37.0 │ 01 Nov 25 09:04 UTC │                     │
	│ node    │ multinode-548731 node delete m03                                                                                                                          │ multinode-548731     │ jenkins │ v1.37.0 │ 01 Nov 25 09:04 UTC │ 01 Nov 25 09:04 UTC │
	│ stop    │ multinode-548731 stop                                                                                                                                     │ multinode-548731     │ jenkins │ v1.37.0 │ 01 Nov 25 09:04 UTC │ 01 Nov 25 09:05 UTC │
	│ start   │ -p multinode-548731 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=crio                                                          │ multinode-548731     │ jenkins │ v1.37.0 │ 01 Nov 25 09:05 UTC │ 01 Nov 25 09:06 UTC │
	│ node    │ list -p multinode-548731                                                                                                                                  │ multinode-548731     │ jenkins │ v1.37.0 │ 01 Nov 25 09:06 UTC │                     │
	│ start   │ -p multinode-548731-m02 --driver=docker  --container-runtime=crio                                                                                         │ multinode-548731-m02 │ jenkins │ v1.37.0 │ 01 Nov 25 09:06 UTC │                     │
	│ start   │ -p multinode-548731-m03 --driver=docker  --container-runtime=crio                                                                                         │ multinode-548731-m03 │ jenkins │ v1.37.0 │ 01 Nov 25 09:06 UTC │ 01 Nov 25 09:06 UTC │
	│ node    │ add -p multinode-548731                                                                                                                                   │ multinode-548731     │ jenkins │ v1.37.0 │ 01 Nov 25 09:06 UTC │                     │
	│ delete  │ -p multinode-548731-m03                                                                                                                                   │ multinode-548731-m03 │ jenkins │ v1.37.0 │ 01 Nov 25 09:06 UTC │ 01 Nov 25 09:06 UTC │
	│ delete  │ -p multinode-548731                                                                                                                                       │ multinode-548731     │ jenkins │ v1.37.0 │ 01 Nov 25 09:06 UTC │ 01 Nov 25 09:06 UTC │
	│ start   │ -p test-preload-398259 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.0 │ test-preload-398259  │ jenkins │ v1.37.0 │ 01 Nov 25 09:06 UTC │ 01 Nov 25 09:07 UTC │
	│ image   │ test-preload-398259 image pull gcr.io/k8s-minikube/busybox                                                                                                │ test-preload-398259  │ jenkins │ v1.37.0 │ 01 Nov 25 09:07 UTC │ 01 Nov 25 09:07 UTC │
	│ stop    │ -p test-preload-398259                                                                                                                                    │ test-preload-398259  │ jenkins │ v1.37.0 │ 01 Nov 25 09:07 UTC │ 01 Nov 25 09:07 UTC │
	│ start   │ -p test-preload-398259 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio                                         │ test-preload-398259  │ jenkins │ v1.37.0 │ 01 Nov 25 09:07 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/01 09:07:22
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1101 09:07:22.619970  169856 out.go:360] Setting OutFile to fd 1 ...
	I1101 09:07:22.620138  169856 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:07:22.620150  169856 out.go:374] Setting ErrFile to fd 2...
	I1101 09:07:22.620158  169856 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:07:22.620361  169856 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21835-5913/.minikube/bin
	I1101 09:07:22.620881  169856 out.go:368] Setting JSON to false
	I1101 09:07:22.621795  169856 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":2991,"bootTime":1761985052,"procs":199,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1101 09:07:22.621914  169856 start.go:143] virtualization: kvm guest
	I1101 09:07:22.624167  169856 out.go:179] * [test-preload-398259] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1101 09:07:22.625767  169856 out.go:179]   - MINIKUBE_LOCATION=21835
	I1101 09:07:22.625766  169856 notify.go:221] Checking for updates...
	I1101 09:07:22.628595  169856 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1101 09:07:22.629972  169856 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21835-5913/kubeconfig
	I1101 09:07:22.631423  169856 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21835-5913/.minikube
	I1101 09:07:22.632827  169856 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1101 09:07:22.633890  169856 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1101 09:07:22.635434  169856 config.go:182] Loaded profile config "test-preload-398259": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I1101 09:07:22.637442  169856 out.go:179] * Kubernetes 1.34.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.34.1
	I1101 09:07:22.638714  169856 driver.go:422] Setting default libvirt URI to qemu:///system
	I1101 09:07:22.663387  169856 docker.go:124] docker version: linux-28.5.1:Docker Engine - Community
	I1101 09:07:22.663550  169856 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 09:07:22.725211  169856 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:0 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:24 OomKillDisable:false NGoroutines:45 SystemTime:2025-11-01 09:07:22.714668262 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1101 09:07:22.725357  169856 docker.go:319] overlay module found
	I1101 09:07:22.727176  169856 out.go:179] * Using the docker driver based on existing profile
	I1101 09:07:22.728596  169856 start.go:309] selected driver: docker
	I1101 09:07:22.728615  169856 start.go:930] validating driver "docker" against &{Name:test-preload-398259 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:test-preload-398259 Namespace:default APIServerHAVIP: APIServ
erName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:fal
se DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 09:07:22.728734  169856 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1101 09:07:22.729428  169856 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 09:07:22.787986  169856 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:0 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:24 OomKillDisable:false NGoroutines:45 SystemTime:2025-11-01 09:07:22.777591895 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1101 09:07:22.788268  169856 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1101 09:07:22.788300  169856 cni.go:84] Creating CNI manager for ""
	I1101 09:07:22.788351  169856 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1101 09:07:22.788392  169856 start.go:353] cluster config:
	{Name:test-preload-398259 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:test-preload-398259 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Cont
ainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVM
netClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 09:07:22.790265  169856 out.go:179] * Starting "test-preload-398259" primary control-plane node in "test-preload-398259" cluster
	I1101 09:07:22.791648  169856 cache.go:124] Beginning downloading kic base image for docker with crio
	I1101 09:07:22.793272  169856 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1101 09:07:22.794598  169856 preload.go:183] Checking if preload exists for k8s version v1.32.0 and runtime crio
	I1101 09:07:22.794793  169856 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1101 09:07:22.817320  169856 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1101 09:07:22.817345  169856 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1101 09:07:22.820697  169856 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.32.0/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4
	I1101 09:07:22.820718  169856 cache.go:59] Caching tarball of preloaded images
	I1101 09:07:22.820918  169856 preload.go:183] Checking if preload exists for k8s version v1.32.0 and runtime crio
	I1101 09:07:22.822746  169856 out.go:179] * Downloading Kubernetes v1.32.0 preload ...
	I1101 09:07:22.824027  169856 preload.go:313] getting checksum for preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4 from gcs api...
	I1101 09:07:22.856315  169856 preload.go:290] Got checksum from GCS API "2acdb4dde52794f2167c79dcee7507ae"
	I1101 09:07:22.856359  169856 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.32.0/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:2acdb4dde52794f2167c79dcee7507ae -> /home/jenkins/minikube-integration/21835-5913/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4
	I1101 09:07:25.183318  169856 cache.go:62] Finished verifying existence of preloaded tar for v1.32.0 on crio
	I1101 09:07:25.183483  169856 profile.go:143] Saving config to /home/jenkins/minikube-integration/21835-5913/.minikube/profiles/test-preload-398259/config.json ...
	I1101 09:07:25.183724  169856 cache.go:233] Successfully downloaded all kic artifacts
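The preload step above fetches the tarball from the URL in the log and verifies it against the MD5 checksum returned by the GCS API. A minimal manual re-check of the same artifact, assuming curl and md5sum are available on the host, could look like:

	URL=https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.32.0/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4
	curl -fLO "$URL"
	# Checksum value taken verbatim from the "Got checksum from GCS API" line above.
	echo "2acdb4dde52794f2167c79dcee7507ae  preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4" | md5sum -c -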
	I1101 09:07:25.183759  169856 start.go:360] acquireMachinesLock for test-preload-398259: {Name:mkc366bf054d0d01534a44955f2a762b7ac566a2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1101 09:07:25.183822  169856 start.go:364] duration metric: took 43.74µs to acquireMachinesLock for "test-preload-398259"
	I1101 09:07:25.183837  169856 start.go:96] Skipping create...Using existing machine configuration
	I1101 09:07:25.183842  169856 fix.go:54] fixHost starting: 
	I1101 09:07:25.184096  169856 cli_runner.go:164] Run: docker container inspect test-preload-398259 --format={{.State.Status}}
	I1101 09:07:25.202269  169856 fix.go:112] recreateIfNeeded on test-preload-398259: state=Stopped err=<nil>
	W1101 09:07:25.202307  169856 fix.go:138] unexpected machine state, will restart: <nil>
	I1101 09:07:25.205156  169856 out.go:252] * Restarting existing docker container for "test-preload-398259" ...
	I1101 09:07:25.205230  169856 cli_runner.go:164] Run: docker start test-preload-398259
	I1101 09:07:25.456435  169856 cli_runner.go:164] Run: docker container inspect test-preload-398259 --format={{.State.Status}}
	I1101 09:07:25.475964  169856 kic.go:430] container "test-preload-398259" state is running.
	I1101 09:07:25.476443  169856 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" test-preload-398259
	I1101 09:07:25.495920  169856 profile.go:143] Saving config to /home/jenkins/minikube-integration/21835-5913/.minikube/profiles/test-preload-398259/config.json ...
	I1101 09:07:25.496200  169856 machine.go:94] provisionDockerMachine start ...
	I1101 09:07:25.496285  169856 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-398259
	I1101 09:07:25.516522  169856 main.go:143] libmachine: Using SSH client type: native
	I1101 09:07:25.516770  169856 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 32958 <nil> <nil>}
	I1101 09:07:25.516799  169856 main.go:143] libmachine: About to run SSH command:
	hostname
	I1101 09:07:25.517479  169856 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:33446->127.0.0.1:32958: read: connection reset by peer
	I1101 09:07:28.661402  169856 main.go:143] libmachine: SSH cmd err, output: <nil>: test-preload-398259
	
	I1101 09:07:28.661441  169856 ubuntu.go:182] provisioning hostname "test-preload-398259"
	I1101 09:07:28.661505  169856 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-398259
	I1101 09:07:28.681422  169856 main.go:143] libmachine: Using SSH client type: native
	I1101 09:07:28.681689  169856 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 32958 <nil> <nil>}
	I1101 09:07:28.681705  169856 main.go:143] libmachine: About to run SSH command:
	sudo hostname test-preload-398259 && echo "test-preload-398259" | sudo tee /etc/hostname
	I1101 09:07:28.834185  169856 main.go:143] libmachine: SSH cmd err, output: <nil>: test-preload-398259
	
	I1101 09:07:28.834286  169856 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-398259
	I1101 09:07:28.853650  169856 main.go:143] libmachine: Using SSH client type: native
	I1101 09:07:28.853907  169856 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 32958 <nil> <nil>}
	I1101 09:07:28.853927  169856 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\stest-preload-398259' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 test-preload-398259/g' /etc/hosts;
				else 
					echo '127.0.1.1 test-preload-398259' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1101 09:07:28.995899  169856 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1101 09:07:28.995926  169856 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21835-5913/.minikube CaCertPath:/home/jenkins/minikube-integration/21835-5913/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21835-5913/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21835-5913/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21835-5913/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21835-5913/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21835-5913/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21835-5913/.minikube}
	I1101 09:07:28.995950  169856 ubuntu.go:190] setting up certificates
	I1101 09:07:28.995963  169856 provision.go:84] configureAuth start
	I1101 09:07:28.996026  169856 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" test-preload-398259
	I1101 09:07:29.015430  169856 provision.go:143] copyHostCerts
	I1101 09:07:29.015506  169856 exec_runner.go:144] found /home/jenkins/minikube-integration/21835-5913/.minikube/cert.pem, removing ...
	I1101 09:07:29.015531  169856 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21835-5913/.minikube/cert.pem
	I1101 09:07:29.015624  169856 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21835-5913/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21835-5913/.minikube/cert.pem (1123 bytes)
	I1101 09:07:29.015784  169856 exec_runner.go:144] found /home/jenkins/minikube-integration/21835-5913/.minikube/key.pem, removing ...
	I1101 09:07:29.015802  169856 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21835-5913/.minikube/key.pem
	I1101 09:07:29.015839  169856 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21835-5913/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21835-5913/.minikube/key.pem (1675 bytes)
	I1101 09:07:29.015953  169856 exec_runner.go:144] found /home/jenkins/minikube-integration/21835-5913/.minikube/ca.pem, removing ...
	I1101 09:07:29.015965  169856 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21835-5913/.minikube/ca.pem
	I1101 09:07:29.016012  169856 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21835-5913/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21835-5913/.minikube/ca.pem (1078 bytes)
	I1101 09:07:29.016091  169856 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21835-5913/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21835-5913/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21835-5913/.minikube/certs/ca-key.pem org=jenkins.test-preload-398259 san=[127.0.0.1 192.168.76.2 localhost minikube test-preload-398259]
	I1101 09:07:29.049235  169856 provision.go:177] copyRemoteCerts
	I1101 09:07:29.049295  169856 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1101 09:07:29.049329  169856 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-398259
	I1101 09:07:29.068681  169856 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32958 SSHKeyPath:/home/jenkins/minikube-integration/21835-5913/.minikube/machines/test-preload-398259/id_rsa Username:docker}
	I1101 09:07:29.169520  169856 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-5913/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1101 09:07:29.188542  169856 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-5913/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1101 09:07:29.207336  169856 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-5913/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1101 09:07:29.225975  169856 provision.go:87] duration metric: took 229.992409ms to configureAuth
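configureAuth above generates the machine server certificate with the SANs listed in the log (127.0.0.1, 192.168.76.2, localhost, minikube, test-preload-398259). A hypothetical way to confirm those SANs on the host, using the ServerCertPath from the auth options above:

	openssl x509 -noout -text \
	  -in /home/jenkins/minikube-integration/21835-5913/.minikube/machines/server.pem \
	  | grep -A1 'Subject Alternative Name'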
	I1101 09:07:29.226011  169856 ubuntu.go:206] setting minikube options for container-runtime
	I1101 09:07:29.226207  169856 config.go:182] Loaded profile config "test-preload-398259": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I1101 09:07:29.226324  169856 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-398259
	I1101 09:07:29.245034  169856 main.go:143] libmachine: Using SSH client type: native
	I1101 09:07:29.245260  169856 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 32958 <nil> <nil>}
	I1101 09:07:29.245283  169856 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1101 09:07:29.524358  169856 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1101 09:07:29.524383  169856 machine.go:97] duration metric: took 4.028165594s to provisionDockerMachine
	I1101 09:07:29.524396  169856 start.go:293] postStartSetup for "test-preload-398259" (driver="docker")
	I1101 09:07:29.524410  169856 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1101 09:07:29.524480  169856 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1101 09:07:29.524530  169856 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-398259
	I1101 09:07:29.543471  169856 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32958 SSHKeyPath:/home/jenkins/minikube-integration/21835-5913/.minikube/machines/test-preload-398259/id_rsa Username:docker}
	I1101 09:07:29.645763  169856 ssh_runner.go:195] Run: cat /etc/os-release
	I1101 09:07:29.649723  169856 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1101 09:07:29.649761  169856 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1101 09:07:29.649784  169856 filesync.go:126] Scanning /home/jenkins/minikube-integration/21835-5913/.minikube/addons for local assets ...
	I1101 09:07:29.649836  169856 filesync.go:126] Scanning /home/jenkins/minikube-integration/21835-5913/.minikube/files for local assets ...
	I1101 09:07:29.649937  169856 filesync.go:149] local asset: /home/jenkins/minikube-integration/21835-5913/.minikube/files/etc/ssl/certs/94142.pem -> 94142.pem in /etc/ssl/certs
	I1101 09:07:29.650042  169856 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1101 09:07:29.658329  169856 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-5913/.minikube/files/etc/ssl/certs/94142.pem --> /etc/ssl/certs/94142.pem (1708 bytes)
	I1101 09:07:29.677303  169856 start.go:296] duration metric: took 152.888972ms for postStartSetup
	I1101 09:07:29.677395  169856 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1101 09:07:29.677432  169856 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-398259
	I1101 09:07:29.696350  169856 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32958 SSHKeyPath:/home/jenkins/minikube-integration/21835-5913/.minikube/machines/test-preload-398259/id_rsa Username:docker}
	I1101 09:07:29.795418  169856 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1101 09:07:29.800303  169856 fix.go:56] duration metric: took 4.616452628s for fixHost
	I1101 09:07:29.800335  169856 start.go:83] releasing machines lock for "test-preload-398259", held for 4.616501942s
	I1101 09:07:29.800498  169856 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" test-preload-398259
	I1101 09:07:29.819737  169856 ssh_runner.go:195] Run: cat /version.json
	I1101 09:07:29.819802  169856 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-398259
	I1101 09:07:29.819806  169856 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1101 09:07:29.819873  169856 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-398259
	I1101 09:07:29.840121  169856 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32958 SSHKeyPath:/home/jenkins/minikube-integration/21835-5913/.minikube/machines/test-preload-398259/id_rsa Username:docker}
	I1101 09:07:29.841298  169856 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32958 SSHKeyPath:/home/jenkins/minikube-integration/21835-5913/.minikube/machines/test-preload-398259/id_rsa Username:docker}
	I1101 09:07:29.938702  169856 ssh_runner.go:195] Run: systemctl --version
	I1101 09:07:29.993349  169856 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1101 09:07:30.030777  169856 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1101 09:07:30.035651  169856 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1101 09:07:30.035723  169856 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1101 09:07:30.044111  169856 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1101 09:07:30.044135  169856 start.go:496] detecting cgroup driver to use...
	I1101 09:07:30.044168  169856 detect.go:190] detected "systemd" cgroup driver on host os
	I1101 09:07:30.044227  169856 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1101 09:07:30.060037  169856 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1101 09:07:30.073218  169856 docker.go:218] disabling cri-docker service (if available) ...
	I1101 09:07:30.073278  169856 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1101 09:07:30.088126  169856 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1101 09:07:30.101302  169856 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1101 09:07:30.178781  169856 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1101 09:07:30.257124  169856 docker.go:234] disabling docker service ...
	I1101 09:07:30.257198  169856 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1101 09:07:30.271878  169856 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1101 09:07:30.284778  169856 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1101 09:07:30.364102  169856 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1101 09:07:30.446394  169856 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1101 09:07:30.459603  169856 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1101 09:07:30.474488  169856 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1101 09:07:30.474549  169856 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:07:30.483932  169856 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1101 09:07:30.484010  169856 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:07:30.493442  169856 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:07:30.502636  169856 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:07:30.511854  169856 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1101 09:07:30.520589  169856 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:07:30.530230  169856 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:07:30.539176  169856 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:07:30.548757  169856 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1101 09:07:30.556640  169856 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1101 09:07:30.564758  169856 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 09:07:30.647959  169856 ssh_runner.go:195] Run: sudo systemctl restart crio
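The sed edits above rewrite /etc/crio/crio.conf.d/02-crio.conf before crio is restarted. One way to spot-check the resulting drop-in (the expected values below are reconstructed from the commands in this log, not read from the file itself):

	out/minikube-linux-amd64 ssh -p test-preload-398259 -- sudo grep -E \
	  'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
	  /etc/crio/crio.conf.d/02-crio.conf
	# Expected, per the sed commands above:
	#   pause_image = "registry.k8s.io/pause:3.10"
	#   cgroup_manager = "systemd"
	#   conmon_cgroup = "pod"
	#   "net.ipv4.ip_unprivileged_port_start=0",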
	I1101 09:07:30.758176  169856 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1101 09:07:30.758244  169856 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1101 09:07:30.762562  169856 start.go:564] Will wait 60s for crictl version
	I1101 09:07:30.762620  169856 ssh_runner.go:195] Run: which crictl
	I1101 09:07:30.766500  169856 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1101 09:07:30.791759  169856 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1101 09:07:30.791909  169856 ssh_runner.go:195] Run: crio --version
	I1101 09:07:30.820348  169856 ssh_runner.go:195] Run: crio --version
	I1101 09:07:30.850835  169856 out.go:179] * Preparing Kubernetes v1.32.0 on CRI-O 1.34.1 ...
	I1101 09:07:30.852134  169856 cli_runner.go:164] Run: docker network inspect test-preload-398259 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1101 09:07:30.869632  169856 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1101 09:07:30.873832  169856 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1101 09:07:30.884595  169856 kubeadm.go:884] updating cluster {Name:test-preload-398259 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:test-preload-398259 Namespace:default APIServerHAVIP: APIServerName:minikubeCA
APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics
:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1101 09:07:30.884713  169856 preload.go:183] Checking if preload exists for k8s version v1.32.0 and runtime crio
	I1101 09:07:30.884777  169856 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 09:07:30.917463  169856 crio.go:514] all images are preloaded for cri-o runtime.
	I1101 09:07:30.917485  169856 crio.go:433] Images already preloaded, skipping extraction
	I1101 09:07:30.917543  169856 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 09:07:30.945524  169856 crio.go:514] all images are preloaded for cri-o runtime.
	I1101 09:07:30.945548  169856 cache_images.go:86] Images are preloaded, skipping loading
	I1101 09:07:30.945556  169856 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.32.0 crio true true} ...
	I1101 09:07:30.945649  169856 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=test-preload-398259 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.0 ClusterName:test-preload-398259 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
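The kubelet unit text above is written to a systemd drop-in on the node (the scp step a few lines below copies it to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf). To review the unit plus its drop-ins as systemd sees them, something like the following should work:

	out/minikube-linux-amd64 ssh -p test-preload-398259 -- sudo systemctl cat kubelet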
	I1101 09:07:30.945709  169856 ssh_runner.go:195] Run: crio config
	I1101 09:07:30.993796  169856 cni.go:84] Creating CNI manager for ""
	I1101 09:07:30.993822  169856 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1101 09:07:30.993842  169856 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1101 09:07:30.993873  169856 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.32.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:test-preload-398259 NodeName:test-preload-398259 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/
etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1101 09:07:30.994033  169856 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "test-preload-398259"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.32.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
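The four kubeadm documents above (InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration) are rendered to /var/tmp/minikube/kubeadm.yaml.new, per the 2215-byte scp step below. A quick, hypothetical way to inspect the rendered file on the node while the profile is still up:

	out/minikube-linux-amd64 ssh -p test-preload-398259 -- sudo cat /var/tmp/minikube/kubeadm.yaml.new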
	
	I1101 09:07:30.994098  169856 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.0
	I1101 09:07:31.002561  169856 binaries.go:44] Found k8s binaries, skipping transfer
	I1101 09:07:31.002628  169856 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1101 09:07:31.010968  169856 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (369 bytes)
	I1101 09:07:31.024334  169856 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1101 09:07:31.037842  169856 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2215 bytes)
	I1101 09:07:31.051430  169856 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1101 09:07:31.055380  169856 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1101 09:07:31.065917  169856 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 09:07:31.147018  169856 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1101 09:07:31.169655  169856 certs.go:69] Setting up /home/jenkins/minikube-integration/21835-5913/.minikube/profiles/test-preload-398259 for IP: 192.168.76.2
	I1101 09:07:31.169695  169856 certs.go:195] generating shared ca certs ...
	I1101 09:07:31.169716  169856 certs.go:227] acquiring lock for ca certs: {Name:mkfdee6a84670347521013ebeef165551380cb9c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:07:31.169925  169856 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21835-5913/.minikube/ca.key
	I1101 09:07:31.169976  169856 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21835-5913/.minikube/proxy-client-ca.key
	I1101 09:07:31.169988  169856 certs.go:257] generating profile certs ...
	I1101 09:07:31.170133  169856 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21835-5913/.minikube/profiles/test-preload-398259/client.key
	I1101 09:07:31.170218  169856 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21835-5913/.minikube/profiles/test-preload-398259/apiserver.key.44ab06a5
	I1101 09:07:31.170270  169856 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21835-5913/.minikube/profiles/test-preload-398259/proxy-client.key
	I1101 09:07:31.170412  169856 certs.go:484] found cert: /home/jenkins/minikube-integration/21835-5913/.minikube/certs/9414.pem (1338 bytes)
	W1101 09:07:31.170452  169856 certs.go:480] ignoring /home/jenkins/minikube-integration/21835-5913/.minikube/certs/9414_empty.pem, impossibly tiny 0 bytes
	I1101 09:07:31.170465  169856 certs.go:484] found cert: /home/jenkins/minikube-integration/21835-5913/.minikube/certs/ca-key.pem (1675 bytes)
	I1101 09:07:31.170498  169856 certs.go:484] found cert: /home/jenkins/minikube-integration/21835-5913/.minikube/certs/ca.pem (1078 bytes)
	I1101 09:07:31.170529  169856 certs.go:484] found cert: /home/jenkins/minikube-integration/21835-5913/.minikube/certs/cert.pem (1123 bytes)
	I1101 09:07:31.170561  169856 certs.go:484] found cert: /home/jenkins/minikube-integration/21835-5913/.minikube/certs/key.pem (1675 bytes)
	I1101 09:07:31.170613  169856 certs.go:484] found cert: /home/jenkins/minikube-integration/21835-5913/.minikube/files/etc/ssl/certs/94142.pem (1708 bytes)
	I1101 09:07:31.171261  169856 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-5913/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1101 09:07:31.190734  169856 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-5913/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1101 09:07:31.211291  169856 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-5913/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1101 09:07:31.231921  169856 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-5913/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1101 09:07:31.258778  169856 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-5913/.minikube/profiles/test-preload-398259/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1101 09:07:31.278278  169856 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-5913/.minikube/profiles/test-preload-398259/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1101 09:07:31.296525  169856 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-5913/.minikube/profiles/test-preload-398259/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1101 09:07:31.315168  169856 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-5913/.minikube/profiles/test-preload-398259/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1101 09:07:31.333285  169856 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-5913/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1101 09:07:31.351764  169856 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-5913/.minikube/certs/9414.pem --> /usr/share/ca-certificates/9414.pem (1338 bytes)
	I1101 09:07:31.371631  169856 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-5913/.minikube/files/etc/ssl/certs/94142.pem --> /usr/share/ca-certificates/94142.pem (1708 bytes)
	I1101 09:07:31.390173  169856 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1101 09:07:31.403459  169856 ssh_runner.go:195] Run: openssl version
	I1101 09:07:31.409783  169856 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9414.pem && ln -fs /usr/share/ca-certificates/9414.pem /etc/ssl/certs/9414.pem"
	I1101 09:07:31.419122  169856 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9414.pem
	I1101 09:07:31.423555  169856 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  1 08:35 /usr/share/ca-certificates/9414.pem
	I1101 09:07:31.423619  169856 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9414.pem
	I1101 09:07:31.458785  169856 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/9414.pem /etc/ssl/certs/51391683.0"
	I1101 09:07:31.467497  169856 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/94142.pem && ln -fs /usr/share/ca-certificates/94142.pem /etc/ssl/certs/94142.pem"
	I1101 09:07:31.476838  169856 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/94142.pem
	I1101 09:07:31.481115  169856 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  1 08:35 /usr/share/ca-certificates/94142.pem
	I1101 09:07:31.481183  169856 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/94142.pem
	I1101 09:07:31.515333  169856 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/94142.pem /etc/ssl/certs/3ec20f2e.0"
	I1101 09:07:31.524273  169856 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1101 09:07:31.533442  169856 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1101 09:07:31.537719  169856 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  1 08:29 /usr/share/ca-certificates/minikubeCA.pem
	I1101 09:07:31.537784  169856 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1101 09:07:31.573526  169856 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1101 09:07:31.582343  169856 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1101 09:07:31.586484  169856 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1101 09:07:31.621235  169856 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1101 09:07:31.656176  169856 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1101 09:07:31.698971  169856 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1101 09:07:31.745754  169856 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1101 09:07:31.785230  169856 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
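Each of the openssl -checkend 86400 runs above exits non-zero if the corresponding certificate expires within 86400 seconds (24 hours) or cannot be read. A standalone equivalent for one of the same files, run on the node (the path is node-local, so this assumes a shell opened via out/minikube-linux-amd64 ssh -p test-preload-398259):

	sudo openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400 \
	  && echo "valid for at least another 24h" \
	  || echo "expires within 24h (or could not be read)"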
	I1101 09:07:31.820440  169856 kubeadm.go:401] StartCluster: {Name:test-preload-398259 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:test-preload-398259 Namespace:default APIServerHAVIP: APIServerName:minikubeCA AP
IServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:fa
lse DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 09:07:31.820522  169856 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1101 09:07:31.820572  169856 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1101 09:07:31.849014  169856 cri.go:89] found id: ""
	I1101 09:07:31.849088  169856 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1101 09:07:31.858001  169856 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1101 09:07:31.858075  169856 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1101 09:07:31.858137  169856 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1101 09:07:31.866817  169856 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1101 09:07:31.867298  169856 kubeconfig.go:47] verify endpoint returned: get endpoint: "test-preload-398259" does not appear in /home/jenkins/minikube-integration/21835-5913/kubeconfig
	I1101 09:07:31.867442  169856 kubeconfig.go:62] /home/jenkins/minikube-integration/21835-5913/kubeconfig needs updating (will repair): [kubeconfig missing "test-preload-398259" cluster setting kubeconfig missing "test-preload-398259" context setting]
	I1101 09:07:31.867791  169856 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21835-5913/kubeconfig: {Name:mk9d3795c1875d6d9ba81c81d9d8436a6b7942d2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:07:31.868324  169856 kapi.go:59] client config for test-preload-398259: &rest.Config{Host:"https://192.168.76.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21835-5913/.minikube/profiles/test-preload-398259/client.crt", KeyFile:"/home/jenkins/minikube-integration/21835-5913/.minikube/profiles/test-preload-398259/client.key", CAFile:"/home/jenkins/minikube-integration/21835-5913/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil),
NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x281c680), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1101 09:07:31.868755  169856 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1101 09:07:31.868773  169856 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1101 09:07:31.868780  169856 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1101 09:07:31.868785  169856 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1101 09:07:31.868790  169856 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1101 09:07:31.869142  169856 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1101 09:07:31.878156  169856 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.76.2
	I1101 09:07:31.878192  169856 kubeadm.go:602] duration metric: took 20.105577ms to restartPrimaryControlPlane
	I1101 09:07:31.878202  169856 kubeadm.go:403] duration metric: took 57.770466ms to StartCluster
	I1101 09:07:31.878217  169856 settings.go:142] acquiring lock: {Name:mkb1ba7d0d4bb15f3f0746ce486d72703f901580 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:07:31.878298  169856 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21835-5913/kubeconfig
	I1101 09:07:31.879111  169856 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21835-5913/kubeconfig: {Name:mk9d3795c1875d6d9ba81c81d9d8436a6b7942d2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:07:31.879512  169856 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1101 09:07:31.879612  169856 config.go:182] Loaded profile config "test-preload-398259": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I1101 09:07:31.879576  169856 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1101 09:07:31.879667  169856 addons.go:70] Setting storage-provisioner=true in profile "test-preload-398259"
	I1101 09:07:31.879680  169856 addons.go:70] Setting default-storageclass=true in profile "test-preload-398259"
	I1101 09:07:31.879683  169856 addons.go:239] Setting addon storage-provisioner=true in "test-preload-398259"
	W1101 09:07:31.879691  169856 addons.go:248] addon storage-provisioner should already be in state true
	I1101 09:07:31.879693  169856 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "test-preload-398259"
	I1101 09:07:31.879733  169856 host.go:66] Checking if "test-preload-398259" exists ...
	I1101 09:07:31.880030  169856 cli_runner.go:164] Run: docker container inspect test-preload-398259 --format={{.State.Status}}
	I1101 09:07:31.880176  169856 cli_runner.go:164] Run: docker container inspect test-preload-398259 --format={{.State.Status}}
	I1101 09:07:31.884611  169856 out.go:179] * Verifying Kubernetes components...
	I1101 09:07:31.885994  169856 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 09:07:31.901280  169856 kapi.go:59] client config for test-preload-398259: &rest.Config{Host:"https://192.168.76.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21835-5913/.minikube/profiles/test-preload-398259/client.crt", KeyFile:"/home/jenkins/minikube-integration/21835-5913/.minikube/profiles/test-preload-398259/client.key", CAFile:"/home/jenkins/minikube-integration/21835-5913/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x281c680), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1101 09:07:31.901646  169856 addons.go:239] Setting addon default-storageclass=true in "test-preload-398259"
	W1101 09:07:31.901669  169856 addons.go:248] addon default-storageclass should already be in state true
	I1101 09:07:31.901698  169856 host.go:66] Checking if "test-preload-398259" exists ...
	I1101 09:07:31.902208  169856 cli_runner.go:164] Run: docker container inspect test-preload-398259 --format={{.State.Status}}
	I1101 09:07:31.903350  169856 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1101 09:07:31.905088  169856 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1101 09:07:31.905108  169856 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1101 09:07:31.905168  169856 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-398259
	I1101 09:07:31.927612  169856 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1101 09:07:31.927641  169856 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1101 09:07:31.927775  169856 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-398259
	I1101 09:07:31.931975  169856 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32958 SSHKeyPath:/home/jenkins/minikube-integration/21835-5913/.minikube/machines/test-preload-398259/id_rsa Username:docker}
	I1101 09:07:31.948750  169856 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32958 SSHKeyPath:/home/jenkins/minikube-integration/21835-5913/.minikube/machines/test-preload-398259/id_rsa Username:docker}
	I1101 09:07:31.987968  169856 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1101 09:07:32.001931  169856 node_ready.go:35] waiting up to 6m0s for node "test-preload-398259" to be "Ready" ...
	I1101 09:07:32.039803  169856 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1101 09:07:32.059305  169856 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	W1101 09:07:32.103997  169856 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 09:07:32.104045  169856 retry.go:31] will retry after 326.869362ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1101 09:07:32.120927  169856 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 09:07:32.120972  169856 retry.go:31] will retry after 237.03307ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 09:07:32.358422  169856 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1101 09:07:32.414234  169856 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 09:07:32.414273  169856 retry.go:31] will retry after 268.036048ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 09:07:32.431506  169856 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1101 09:07:32.488128  169856 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 09:07:32.488163  169856 retry.go:31] will retry after 464.262942ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 09:07:32.683268  169856 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1101 09:07:32.740397  169856 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 09:07:32.740433  169856 retry.go:31] will retry after 389.349488ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 09:07:32.952728  169856 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1101 09:07:33.010489  169856 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 09:07:33.010536  169856 retry.go:31] will retry after 505.993678ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 09:07:33.130842  169856 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1101 09:07:33.185032  169856 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 09:07:33.185060  169856 retry.go:31] will retry after 569.686637ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 09:07:33.516751  169856 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1101 09:07:33.573891  169856 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 09:07:33.573920  169856 retry.go:31] will retry after 649.976315ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 09:07:33.755186  169856 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1101 09:07:33.813562  169856 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 09:07:33.813602  169856 retry.go:31] will retry after 1.201195402s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1101 09:07:34.003513  169856 node_ready.go:55] error getting node "test-preload-398259" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-398259": dial tcp 192.168.76.2:8443: connect: connection refused
	I1101 09:07:34.224854  169856 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1101 09:07:34.282001  169856 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 09:07:34.282033  169856 retry.go:31] will retry after 1.62604091s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 09:07:35.014977  169856 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1101 09:07:35.070876  169856 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 09:07:35.070925  169856 retry.go:31] will retry after 1.955150497s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 09:07:35.908272  169856 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1101 09:07:35.965132  169856 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 09:07:35.965166  169856 retry.go:31] will retry after 1.145359452s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1101 09:07:36.502766  169856 node_ready.go:55] error getting node "test-preload-398259" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-398259": dial tcp 192.168.76.2:8443: connect: connection refused
	I1101 09:07:37.027284  169856 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1101 09:07:37.085347  169856 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 09:07:37.085379  169856 retry.go:31] will retry after 1.834341161s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 09:07:37.111629  169856 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1101 09:07:37.169028  169856 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 09:07:37.169060  169856 retry.go:31] will retry after 2.824480099s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1101 09:07:38.503498  169856 node_ready.go:55] error getting node "test-preload-398259" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-398259": dial tcp 192.168.76.2:8443: connect: connection refused
	I1101 09:07:38.920043  169856 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1101 09:07:38.976952  169856 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 09:07:38.976992  169856 retry.go:31] will retry after 6.326184229s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 09:07:39.994208  169856 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1101 09:07:40.053350  169856 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 09:07:40.053380  169856 retry.go:31] will retry after 3.165551359s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1101 09:07:40.503545  169856 node_ready.go:55] error getting node "test-preload-398259" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-398259": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 09:07:43.002892  169856 node_ready.go:55] error getting node "test-preload-398259" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-398259": dial tcp 192.168.76.2:8443: connect: connection refused
	I1101 09:07:43.219209  169856 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1101 09:07:43.275805  169856 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 09:07:43.275833  169856 retry.go:31] will retry after 8.788910692s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 09:07:45.303334  169856 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1101 09:07:45.360611  169856 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 09:07:45.360647  169856 retry.go:31] will retry after 4.530632619s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1101 09:07:45.503308  169856 node_ready.go:55] error getting node "test-preload-398259" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-398259": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 09:07:48.003158  169856 node_ready.go:55] error getting node "test-preload-398259" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-398259": dial tcp 192.168.76.2:8443: connect: connection refused
	I1101 09:07:49.892308  169856 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1101 09:07:49.949937  169856 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 09:07:49.949976  169856 retry.go:31] will retry after 5.985181439s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1101 09:07:50.503522  169856 node_ready.go:55] error getting node "test-preload-398259" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-398259": dial tcp 192.168.76.2:8443: connect: connection refused
	I1101 09:07:52.064986  169856 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1101 09:07:52.120651  169856 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 09:07:52.120685  169856 retry.go:31] will retry after 10.924312634s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1101 09:07:53.002659  169856 node_ready.go:55] error getting node "test-preload-398259" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-398259": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 09:07:55.502766  169856 node_ready.go:55] error getting node "test-preload-398259" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-398259": dial tcp 192.168.76.2:8443: connect: connection refused
	I1101 09:07:55.936375  169856 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1101 09:07:55.993387  169856 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 09:07:55.993425  169856 retry.go:31] will retry after 12.747359897s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1101 09:07:57.503016  169856 node_ready.go:55] error getting node "test-preload-398259" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-398259": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 09:08:00.002675  169856 node_ready.go:55] error getting node "test-preload-398259" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-398259": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 09:08:02.003026  169856 node_ready.go:55] error getting node "test-preload-398259" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-398259": dial tcp 192.168.76.2:8443: connect: connection refused
	I1101 09:08:03.045449  169856 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1101 09:08:03.102161  169856 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 09:08:03.102196  169856 retry.go:31] will retry after 18.789725991s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1101 09:08:04.003350  169856 node_ready.go:55] error getting node "test-preload-398259" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-398259": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 09:08:06.502604  169856 node_ready.go:55] error getting node "test-preload-398259" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-398259": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 09:08:08.502940  169856 node_ready.go:55] error getting node "test-preload-398259" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-398259": dial tcp 192.168.76.2:8443: connect: connection refused
	I1101 09:08:08.741355  169856 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1101 09:08:08.798604  169856 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 09:08:08.798636  169856 retry.go:31] will retry after 27.473065748s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1101 09:08:11.002695  169856 node_ready.go:55] error getting node "test-preload-398259" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-398259": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 09:08:13.003157  169856 node_ready.go:55] error getting node "test-preload-398259" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-398259": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 09:08:15.003463  169856 node_ready.go:55] error getting node "test-preload-398259" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-398259": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 09:08:17.003547  169856 node_ready.go:55] error getting node "test-preload-398259" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-398259": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 09:08:19.502544  169856 node_ready.go:55] error getting node "test-preload-398259" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-398259": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 09:08:21.503549  169856 node_ready.go:55] error getting node "test-preload-398259" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-398259": dial tcp 192.168.76.2:8443: connect: connection refused
	I1101 09:08:21.892328  169856 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1101 09:08:21.949295  169856 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 09:08:21.949331  169856 retry.go:31] will retry after 30.093713452s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1101 09:08:24.002847  169856 node_ready.go:55] error getting node "test-preload-398259" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-398259": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 09:08:26.502667  169856 node_ready.go:55] error getting node "test-preload-398259" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-398259": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 09:08:28.503560  169856 node_ready.go:55] error getting node "test-preload-398259" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-398259": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 09:08:31.003108  169856 node_ready.go:55] error getting node "test-preload-398259" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-398259": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 09:08:33.502661  169856 node_ready.go:55] error getting node "test-preload-398259" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-398259": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 09:08:35.502782  169856 node_ready.go:55] error getting node "test-preload-398259" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-398259": dial tcp 192.168.76.2:8443: connect: connection refused
	I1101 09:08:36.272311  169856 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1101 09:08:36.326324  169856 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 09:08:36.326357  169856 retry.go:31] will retry after 36.515182494s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1101 09:08:37.503665  169856 node_ready.go:55] error getting node "test-preload-398259" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-398259": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 09:08:40.002549  169856 node_ready.go:55] error getting node "test-preload-398259" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-398259": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 09:08:42.002904  169856 node_ready.go:55] error getting node "test-preload-398259" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-398259": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 09:08:44.003432  169856 node_ready.go:55] error getting node "test-preload-398259" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-398259": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 09:08:46.502577  169856 node_ready.go:55] error getting node "test-preload-398259" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-398259": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 09:08:48.503557  169856 node_ready.go:55] error getting node "test-preload-398259" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-398259": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 09:08:51.002577  169856 node_ready.go:55] error getting node "test-preload-398259" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-398259": dial tcp 192.168.76.2:8443: connect: connection refused
	I1101 09:08:52.044064  169856 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1101 09:08:52.099788  169856 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 09:08:52.099824  169856 retry.go:31] will retry after 17.28065807s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1101 09:08:53.502627  169856 node_ready.go:55] error getting node "test-preload-398259" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-398259": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 09:08:55.503283  169856 node_ready.go:55] error getting node "test-preload-398259" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-398259": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 09:08:58.002848  169856 node_ready.go:55] error getting node "test-preload-398259" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-398259": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 09:09:00.502791  169856 node_ready.go:55] error getting node "test-preload-398259" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-398259": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 09:09:03.003198  169856 node_ready.go:55] error getting node "test-preload-398259" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-398259": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 09:09:05.502772  169856 node_ready.go:55] error getting node "test-preload-398259" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-398259": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 09:09:07.503386  169856 node_ready.go:55] error getting node "test-preload-398259" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-398259": dial tcp 192.168.76.2:8443: connect: connection refused
	I1101 09:09:09.381371  169856 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1101 09:09:09.438826  169856 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1101 09:09:09.438962  169856 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	W1101 09:09:09.503535  169856 node_ready.go:55] error getting node "test-preload-398259" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-398259": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 09:09:12.002568  169856 node_ready.go:55] error getting node "test-preload-398259" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-398259": dial tcp 192.168.76.2:8443: connect: connection refused
	I1101 09:09:12.842260  169856 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1101 09:09:12.896883  169856 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1101 09:09:12.897013  169856 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1101 09:09:12.898894  169856 out.go:179] * Enabled addons: 
	I1101 09:09:12.900429  169856 addons.go:515] duration metric: took 1m41.020853674s for enable addons: enabled=[]
	W1101 09:09:14.002688  169856 node_ready.go:55] error getting node "test-preload-398259" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-398259": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 09:09:16.502619  169856 node_ready.go:55] error getting node "test-preload-398259" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-398259": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 09:09:18.503391  169856 node_ready.go:55] error getting node "test-preload-398259" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-398259": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 09:09:21.002954  169856 node_ready.go:55] error getting node "test-preload-398259" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-398259": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 09:09:23.502849  169856 node_ready.go:55] error getting node "test-preload-398259" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-398259": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 09:09:25.503054  169856 node_ready.go:55] error getting node "test-preload-398259" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-398259": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 09:09:28.003201  169856 node_ready.go:55] error getting node "test-preload-398259" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-398259": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 09:09:30.502628  169856 node_ready.go:55] error getting node "test-preload-398259" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-398259": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 09:09:32.503011  169856 node_ready.go:55] error getting node "test-preload-398259" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-398259": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 09:09:35.002676  169856 node_ready.go:55] error getting node "test-preload-398259" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-398259": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 09:09:37.003350  169856 node_ready.go:55] error getting node "test-preload-398259" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-398259": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 09:09:39.503343  169856 node_ready.go:55] error getting node "test-preload-398259" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-398259": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 09:09:42.002850  169856 node_ready.go:55] error getting node "test-preload-398259" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-398259": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 09:09:44.502603  169856 node_ready.go:55] error getting node "test-preload-398259" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-398259": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 09:09:46.502822  169856 node_ready.go:55] error getting node "test-preload-398259" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-398259": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 09:09:49.002625  169856 node_ready.go:55] error getting node "test-preload-398259" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-398259": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 09:09:51.502740  169856 node_ready.go:55] error getting node "test-preload-398259" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-398259": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 09:09:53.503520  169856 node_ready.go:55] error getting node "test-preload-398259" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-398259": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 09:09:56.002800  169856 node_ready.go:55] error getting node "test-preload-398259" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-398259": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 09:09:58.003541  169856 node_ready.go:55] error getting node "test-preload-398259" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-398259": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 09:10:00.502804  169856 node_ready.go:55] error getting node "test-preload-398259" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-398259": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 09:10:03.002704  169856 node_ready.go:55] error getting node "test-preload-398259" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-398259": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 09:10:05.003342  169856 node_ready.go:55] error getting node "test-preload-398259" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-398259": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 09:10:07.502816  169856 node_ready.go:55] error getting node "test-preload-398259" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-398259": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 09:10:10.002805  169856 node_ready.go:55] error getting node "test-preload-398259" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-398259": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 09:10:12.003525  169856 node_ready.go:55] error getting node "test-preload-398259" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-398259": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 09:10:14.503301  169856 node_ready.go:55] error getting node "test-preload-398259" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-398259": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 09:10:17.002989  169856 node_ready.go:55] error getting node "test-preload-398259" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-398259": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 09:10:19.502707  169856 node_ready.go:55] error getting node "test-preload-398259" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-398259": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 09:10:21.503577  169856 node_ready.go:55] error getting node "test-preload-398259" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-398259": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 09:10:24.003223  169856 node_ready.go:55] error getting node "test-preload-398259" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-398259": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 09:10:26.502749  169856 node_ready.go:55] error getting node "test-preload-398259" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-398259": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 09:10:28.503361  169856 node_ready.go:55] error getting node "test-preload-398259" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-398259": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 09:10:31.002953  169856 node_ready.go:55] error getting node "test-preload-398259" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-398259": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 09:10:33.502758  169856 node_ready.go:55] error getting node "test-preload-398259" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-398259": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 09:10:35.503454  169856 node_ready.go:55] error getting node "test-preload-398259" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-398259": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 09:10:38.003140  169856 node_ready.go:55] error getting node "test-preload-398259" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-398259": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 09:10:40.502767  169856 node_ready.go:55] error getting node "test-preload-398259" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-398259": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 09:10:43.002731  169856 node_ready.go:55] error getting node "test-preload-398259" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-398259": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 09:10:45.003360  169856 node_ready.go:55] error getting node "test-preload-398259" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-398259": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 09:10:47.503416  169856 node_ready.go:55] error getting node "test-preload-398259" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-398259": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 09:10:50.002946  169856 node_ready.go:55] error getting node "test-preload-398259" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-398259": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 09:10:52.502749  169856 node_ready.go:55] error getting node "test-preload-398259" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-398259": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 09:10:55.002631  169856 node_ready.go:55] error getting node "test-preload-398259" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-398259": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 09:10:57.003511  169856 node_ready.go:55] error getting node "test-preload-398259" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-398259": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 09:10:59.503454  169856 node_ready.go:55] error getting node "test-preload-398259" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-398259": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 09:11:02.002954  169856 node_ready.go:55] error getting node "test-preload-398259" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-398259": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 09:11:04.502945  169856 node_ready.go:55] error getting node "test-preload-398259" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-398259": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 09:11:07.002607  169856 node_ready.go:55] error getting node "test-preload-398259" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-398259": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 09:11:09.003483  169856 node_ready.go:55] error getting node "test-preload-398259" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-398259": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 09:11:11.502915  169856 node_ready.go:55] error getting node "test-preload-398259" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-398259": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 09:11:14.002830  169856 node_ready.go:55] error getting node "test-preload-398259" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-398259": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 09:11:16.003565  169856 node_ready.go:55] error getting node "test-preload-398259" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-398259": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 09:11:18.503330  169856 node_ready.go:55] error getting node "test-preload-398259" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-398259": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 09:11:21.002984  169856 node_ready.go:55] error getting node "test-preload-398259" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-398259": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 09:11:23.502898  169856 node_ready.go:55] error getting node "test-preload-398259" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-398259": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 09:11:25.503537  169856 node_ready.go:55] error getting node "test-preload-398259" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-398259": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 09:11:28.003279  169856 node_ready.go:55] error getting node "test-preload-398259" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-398259": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 09:11:30.003596  169856 node_ready.go:55] error getting node "test-preload-398259" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-398259": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 09:11:32.502910  169856 node_ready.go:55] error getting node "test-preload-398259" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-398259": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 09:11:34.503511  169856 node_ready.go:55] error getting node "test-preload-398259" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-398259": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 09:11:37.002573  169856 node_ready.go:55] error getting node "test-preload-398259" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-398259": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 09:11:39.002749  169856 node_ready.go:55] error getting node "test-preload-398259" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-398259": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 09:11:41.502648  169856 node_ready.go:55] error getting node "test-preload-398259" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-398259": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 09:11:43.503457  169856 node_ready.go:55] error getting node "test-preload-398259" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-398259": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 09:11:46.002582  169856 node_ready.go:55] error getting node "test-preload-398259" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-398259": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 09:11:48.002825  169856 node_ready.go:55] error getting node "test-preload-398259" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-398259": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 09:11:50.502739  169856 node_ready.go:55] error getting node "test-preload-398259" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-398259": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 09:11:52.502831  169856 node_ready.go:55] error getting node "test-preload-398259" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-398259": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 09:11:54.503534  169856 node_ready.go:55] error getting node "test-preload-398259" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-398259": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 09:11:57.003627  169856 node_ready.go:55] error getting node "test-preload-398259" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-398259": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 09:11:59.502669  169856 node_ready.go:55] error getting node "test-preload-398259" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-398259": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 09:12:02.002860  169856 node_ready.go:55] error getting node "test-preload-398259" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-398259": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 09:12:04.003305  169856 node_ready.go:55] error getting node "test-preload-398259" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-398259": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 09:12:06.003471  169856 node_ready.go:55] error getting node "test-preload-398259" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-398259": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 09:12:08.502696  169856 node_ready.go:55] error getting node "test-preload-398259" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-398259": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 09:12:11.002566  169856 node_ready.go:55] error getting node "test-preload-398259" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-398259": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 09:12:13.002955  169856 node_ready.go:55] error getting node "test-preload-398259" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-398259": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 09:12:15.502731  169856 node_ready.go:55] error getting node "test-preload-398259" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-398259": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 09:12:17.503580  169856 node_ready.go:55] error getting node "test-preload-398259" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-398259": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 09:12:20.002761  169856 node_ready.go:55] error getting node "test-preload-398259" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-398259": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 09:12:22.002919  169856 node_ready.go:55] error getting node "test-preload-398259" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-398259": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 09:12:24.003390  169856 node_ready.go:55] error getting node "test-preload-398259" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-398259": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 09:12:26.502567  169856 node_ready.go:55] error getting node "test-preload-398259" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-398259": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 09:12:28.502763  169856 node_ready.go:55] error getting node "test-preload-398259" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-398259": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 09:12:31.002683  169856 node_ready.go:55] error getting node "test-preload-398259" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-398259": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 09:12:33.003279  169856 node_ready.go:55] error getting node "test-preload-398259" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-398259": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 09:12:35.503220  169856 node_ready.go:55] error getting node "test-preload-398259" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-398259": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 09:12:37.503512  169856 node_ready.go:55] error getting node "test-preload-398259" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-398259": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 09:12:40.003584  169856 node_ready.go:55] error getting node "test-preload-398259" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-398259": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 09:12:42.502634  169856 node_ready.go:55] error getting node "test-preload-398259" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-398259": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 09:12:44.502727  169856 node_ready.go:55] error getting node "test-preload-398259" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-398259": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 09:12:47.002667  169856 node_ready.go:55] error getting node "test-preload-398259" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-398259": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 09:12:49.002973  169856 node_ready.go:55] error getting node "test-preload-398259" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-398259": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 09:12:51.502945  169856 node_ready.go:55] error getting node "test-preload-398259" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-398259": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 09:12:54.003579  169856 node_ready.go:55] error getting node "test-preload-398259" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-398259": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 09:12:56.503518  169856 node_ready.go:55] error getting node "test-preload-398259" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-398259": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 09:12:59.002746  169856 node_ready.go:55] error getting node "test-preload-398259" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-398259": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 09:13:01.503591  169856 node_ready.go:55] error getting node "test-preload-398259" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-398259": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 09:13:04.003334  169856 node_ready.go:55] error getting node "test-preload-398259" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-398259": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 09:13:06.503479  169856 node_ready.go:55] error getting node "test-preload-398259" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-398259": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 09:13:09.002719  169856 node_ready.go:55] error getting node "test-preload-398259" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-398259": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 09:13:11.003538  169856 node_ready.go:55] error getting node "test-preload-398259" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-398259": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 09:13:13.503462  169856 node_ready.go:55] error getting node "test-preload-398259" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-398259": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 09:13:16.003637  169856 node_ready.go:55] error getting node "test-preload-398259" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-398259": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 09:13:18.502719  169856 node_ready.go:55] error getting node "test-preload-398259" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-398259": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 09:13:21.002643  169856 node_ready.go:55] error getting node "test-preload-398259" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-398259": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 09:13:23.003206  169856 node_ready.go:55] error getting node "test-preload-398259" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-398259": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 09:13:25.003575  169856 node_ready.go:55] error getting node "test-preload-398259" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-398259": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 09:13:27.502672  169856 node_ready.go:55] error getting node "test-preload-398259" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-398259": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 09:13:29.503501  169856 node_ready.go:55] error getting node "test-preload-398259" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-398259": dial tcp 192.168.76.2:8443: connect: connection refused
	W1101 09:13:31.503582  169856 node_ready.go:55] error getting node "test-preload-398259" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/test-preload-398259": dial tcp 192.168.76.2:8443: connect: connection refused
	I1101 09:13:32.002410  169856 node_ready.go:38] duration metric: took 6m0.000402692s for node "test-preload-398259" to be "Ready" ...
	I1101 09:13:32.004663  169856 out.go:203] 
	W1101 09:13:32.006128  169856 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1101 09:13:32.006145  169856 out.go:285] * 
	W1101 09:13:32.007793  169856 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1101 09:13:32.009521  169856 out.go:203] 
	
	
	==> CRI-O <==
	Nov 01 09:08:57 test-preload-398259 crio[549]: time="2025-11-01T09:08:57.281683113Z" level=error msg="Failed to cleanup (probably retrying): failed to cleanup container storage: replacing mount point \"/var/lib/containers/storage/overlay/f4d96ad871cda418860b7a2b83b7f57babffda10dde3f3b5f86c72e10c43ae23/merged\": directory not empty" id=365b3e90-d1f0-429d-b3f3-e2e465d8e8e4 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 09:09:40 test-preload-398259 crio[549]: time="2025-11-01T09:09:40.529352029Z" level=info msg="createCtr: deleting container 41f43548239dc23aaa9f8dad2bb0311226ee4a05de426e02e50586197821d78a from storage" id=b7cd70dd-2ee5-4424-83b3-6c93f5e0c792 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 09:09:40 test-preload-398259 crio[549]: time="2025-11-01T09:09:40.529651593Z" level=error msg="Failed to cleanup (probably retrying): failed to cleanup container storage: replacing mount point \"/var/lib/containers/storage/overlay/bccee7286be883e900f5058c9d20bc382440b49fba0eb595523d1f4c8fd83c85/merged\": directory not empty" id=b7cd70dd-2ee5-4424-83b3-6c93f5e0c792 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 09:09:40 test-preload-398259 crio[549]: time="2025-11-01T09:09:40.53048063Z" level=info msg="createCtr: deleting container ca172015a570d97225182d55ba09396d2377c9865c9c0e3868bf1b271f5e8179 from storage" id=b872420f-4cf5-4e2f-b6b0-c8f24b69083d name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 09:09:40 test-preload-398259 crio[549]: time="2025-11-01T09:09:40.530563431Z" level=info msg="createCtr: deleting container 88ca85a126108eacf577f08c0e1a5dda523ec36c9f480b1d3b68b842c6e5dd06 from storage" id=70191d54-d2be-48f7-8d49-73185dbb9841 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 09:09:40 test-preload-398259 crio[549]: time="2025-11-01T09:09:40.530573432Z" level=info msg="createCtr: deleting container 7ee22595fa1de94db494ff675b9e1f24e6bd8368960dd84554fc82a5ef3a1270 from storage" id=365b3e90-d1f0-429d-b3f3-e2e465d8e8e4 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 09:09:40 test-preload-398259 crio[549]: time="2025-11-01T09:09:40.530818844Z" level=error msg="Failed to cleanup (probably retrying): failed to cleanup container storage: replacing mount point \"/var/lib/containers/storage/overlay/1381ce8ee599676173d13bdb9ffdbc299de154534cbe02a9e5adb0f9c618c0ee/merged\": directory not empty" id=b872420f-4cf5-4e2f-b6b0-c8f24b69083d name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 09:09:40 test-preload-398259 crio[549]: time="2025-11-01T09:09:40.531140327Z" level=error msg="Failed to cleanup (probably retrying): failed to cleanup container storage: replacing mount point \"/var/lib/containers/storage/overlay/f4d96ad871cda418860b7a2b83b7f57babffda10dde3f3b5f86c72e10c43ae23/merged\": directory not empty" id=365b3e90-d1f0-429d-b3f3-e2e465d8e8e4 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 09:09:40 test-preload-398259 crio[549]: time="2025-11-01T09:09:40.531334158Z" level=error msg="Failed to cleanup (probably retrying): failed to cleanup container storage: replacing mount point \"/var/lib/containers/storage/overlay/5a25c5f427ea28b3a85cbe83280748e47a0156887fed83cdaa3df7d7793ce2a4/merged\": directory not empty" id=70191d54-d2be-48f7-8d49-73185dbb9841 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 09:10:45 test-preload-398259 crio[549]: time="2025-11-01T09:10:45.403769513Z" level=info msg="createCtr: deleting container 41f43548239dc23aaa9f8dad2bb0311226ee4a05de426e02e50586197821d78a from storage" id=b7cd70dd-2ee5-4424-83b3-6c93f5e0c792 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 09:10:45 test-preload-398259 crio[549]: time="2025-11-01T09:10:45.40409454Z" level=error msg="Failed to cleanup (probably retrying): failed to cleanup container storage: replacing mount point \"/var/lib/containers/storage/overlay/bccee7286be883e900f5058c9d20bc382440b49fba0eb595523d1f4c8fd83c85/merged\": directory not empty" id=b7cd70dd-2ee5-4424-83b3-6c93f5e0c792 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 09:10:45 test-preload-398259 crio[549]: time="2025-11-01T09:10:45.404934966Z" level=info msg="createCtr: deleting container ca172015a570d97225182d55ba09396d2377c9865c9c0e3868bf1b271f5e8179 from storage" id=b872420f-4cf5-4e2f-b6b0-c8f24b69083d name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 09:10:45 test-preload-398259 crio[549]: time="2025-11-01T09:10:45.40503705Z" level=info msg="createCtr: deleting container 7ee22595fa1de94db494ff675b9e1f24e6bd8368960dd84554fc82a5ef3a1270 from storage" id=365b3e90-d1f0-429d-b3f3-e2e465d8e8e4 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 09:10:45 test-preload-398259 crio[549]: time="2025-11-01T09:10:45.40496937Z" level=info msg="createCtr: deleting container 88ca85a126108eacf577f08c0e1a5dda523ec36c9f480b1d3b68b842c6e5dd06 from storage" id=70191d54-d2be-48f7-8d49-73185dbb9841 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 09:10:45 test-preload-398259 crio[549]: time="2025-11-01T09:10:45.405384856Z" level=error msg="Failed to cleanup (probably retrying): failed to cleanup container storage: replacing mount point \"/var/lib/containers/storage/overlay/f4d96ad871cda418860b7a2b83b7f57babffda10dde3f3b5f86c72e10c43ae23/merged\": directory not empty" id=365b3e90-d1f0-429d-b3f3-e2e465d8e8e4 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 09:10:45 test-preload-398259 crio[549]: time="2025-11-01T09:10:45.405536765Z" level=error msg="Failed to cleanup (probably retrying): failed to cleanup container storage: replacing mount point \"/var/lib/containers/storage/overlay/5a25c5f427ea28b3a85cbe83280748e47a0156887fed83cdaa3df7d7793ce2a4/merged\": directory not empty" id=70191d54-d2be-48f7-8d49-73185dbb9841 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 09:10:45 test-preload-398259 crio[549]: time="2025-11-01T09:10:45.405708057Z" level=error msg="Failed to cleanup (probably retrying): failed to cleanup container storage: replacing mount point \"/var/lib/containers/storage/overlay/1381ce8ee599676173d13bdb9ffdbc299de154534cbe02a9e5adb0f9c618c0ee/merged\": directory not empty" id=b872420f-4cf5-4e2f-b6b0-c8f24b69083d name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 09:12:22 test-preload-398259 crio[549]: time="2025-11-01T09:12:22.71439627Z" level=info msg="createCtr: deleting container 41f43548239dc23aaa9f8dad2bb0311226ee4a05de426e02e50586197821d78a from storage" id=b7cd70dd-2ee5-4424-83b3-6c93f5e0c792 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 09:12:22 test-preload-398259 crio[549]: time="2025-11-01T09:12:22.714672964Z" level=error msg="Failed to cleanup (probably retrying): failed to cleanup container storage: replacing mount point \"/var/lib/containers/storage/overlay/bccee7286be883e900f5058c9d20bc382440b49fba0eb595523d1f4c8fd83c85/merged\": directory not empty" id=b7cd70dd-2ee5-4424-83b3-6c93f5e0c792 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 09:12:22 test-preload-398259 crio[549]: time="2025-11-01T09:12:22.715525983Z" level=info msg="createCtr: deleting container 88ca85a126108eacf577f08c0e1a5dda523ec36c9f480b1d3b68b842c6e5dd06 from storage" id=70191d54-d2be-48f7-8d49-73185dbb9841 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 09:12:22 test-preload-398259 crio[549]: time="2025-11-01T09:12:22.715600274Z" level=info msg="createCtr: deleting container ca172015a570d97225182d55ba09396d2377c9865c9c0e3868bf1b271f5e8179 from storage" id=b872420f-4cf5-4e2f-b6b0-c8f24b69083d name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 09:12:22 test-preload-398259 crio[549]: time="2025-11-01T09:12:22.715644771Z" level=info msg="createCtr: deleting container 7ee22595fa1de94db494ff675b9e1f24e6bd8368960dd84554fc82a5ef3a1270 from storage" id=365b3e90-d1f0-429d-b3f3-e2e465d8e8e4 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 09:12:22 test-preload-398259 crio[549]: time="2025-11-01T09:12:22.715915499Z" level=error msg="Failed to cleanup (probably retrying): failed to cleanup container storage: replacing mount point \"/var/lib/containers/storage/overlay/f4d96ad871cda418860b7a2b83b7f57babffda10dde3f3b5f86c72e10c43ae23/merged\": directory not empty" id=365b3e90-d1f0-429d-b3f3-e2e465d8e8e4 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 09:12:22 test-preload-398259 crio[549]: time="2025-11-01T09:12:22.716091901Z" level=error msg="Failed to cleanup (probably retrying): failed to cleanup container storage: replacing mount point \"/var/lib/containers/storage/overlay/1381ce8ee599676173d13bdb9ffdbc299de154534cbe02a9e5adb0f9c618c0ee/merged\": directory not empty" id=b872420f-4cf5-4e2f-b6b0-c8f24b69083d name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 09:12:22 test-preload-398259 crio[549]: time="2025-11-01T09:12:22.716263164Z" level=error msg="Failed to cleanup (probably retrying): failed to cleanup container storage: replacing mount point \"/var/lib/containers/storage/overlay/5a25c5f427ea28b3a85cbe83280748e47a0156887fed83cdaa3df7d7793ce2a4/merged\": directory not empty" id=70191d54-d2be-48f7-8d49-73185dbb9841 name=/runtime.v1.RuntimeService/CreateContainer
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[  +0.000025] ll header: 00000000: 0a 94 8e 4a 1a 1e 72 e7 eb 50 9b c7 08 00
	[  +2.047726] IPv4: martian source 10.105.176.244 from 192.168.49.2, on dev eth0
	[  +0.000008] ll header: 00000000: 0a 94 8e 4a 1a 1e 72 e7 eb 50 9b c7 08 00
	[Nov 1 08:38] IPv4: martian source 10.105.176.244 from 192.168.49.2, on dev eth0
	[  +0.000025] ll header: 00000000: 0a 94 8e 4a 1a 1e 72 e7 eb 50 9b c7 08 00
	[  +2.215674] IPv4: martian source 10.105.176.244 from 192.168.49.2, on dev eth0
	[  +0.000007] ll header: 00000000: 0a 94 8e 4a 1a 1e 72 e7 eb 50 9b c7 08 00
	[  +1.047939] IPv4: martian source 10.244.0.3 from 192.168.49.2, on dev eth0
	[  +0.000023] ll header: 00000000: 0a 94 8e 4a 1a 1e 72 e7 eb 50 9b c7 08 00
	[  +1.023915] IPv4: martian source 10.244.0.3 from 192.168.49.2, on dev eth0
	[  +0.000007] ll header: 00000000: 0a 94 8e 4a 1a 1e 72 e7 eb 50 9b c7 08 00
	[  +1.023851] IPv4: martian source 10.244.0.3 from 192.168.49.2, on dev eth0
	[  +0.000021] ll header: 00000000: 0a 94 8e 4a 1a 1e 72 e7 eb 50 9b c7 08 00
	[  +1.023867] IPv4: martian source 10.244.0.3 from 192.168.49.2, on dev eth0
	[  +0.000026] ll header: 00000000: 0a 94 8e 4a 1a 1e 72 e7 eb 50 9b c7 08 00
	[  +1.023880] IPv4: martian source 10.244.0.3 from 192.168.49.2, on dev eth0
	[  +0.000033] ll header: 00000000: 0a 94 8e 4a 1a 1e 72 e7 eb 50 9b c7 08 00
	[  +1.023884] IPv4: martian source 10.244.0.3 from 192.168.49.2, on dev eth0
	[  +0.000029] ll header: 00000000: 0a 94 8e 4a 1a 1e 72 e7 eb 50 9b c7 08 00
	[  +1.023851] IPv4: martian source 10.244.0.3 from 192.168.49.2, on dev eth0
	[  +0.000028] ll header: 00000000: 0a 94 8e 4a 1a 1e 72 e7 eb 50 9b c7 08 00
	[  +4.031524] IPv4: martian source 10.244.0.3 from 192.168.49.2, on dev eth0
	[  +0.000026] ll header: 00000000: 0a 94 8e 4a 1a 1e 72 e7 eb 50 9b c7 08 00
	[  +8.255123] IPv4: martian source 10.244.0.3 from 192.168.49.2, on dev eth0
	[  +0.000022] ll header: 00000000: 0a 94 8e 4a 1a 1e 72 e7 eb 50 9b c7 08 00
	
	
	==> kernel <==
	 09:13:33 up 56 min,  0 user,  load average: 0.05, 0.34, 0.62
	Linux test-preload-398259 6.8.0-1043-gcp #46~22.04.1-Ubuntu SMP Wed Oct 22 19:00:03 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Nov 01 09:13:01 test-preload-398259 kubelet[712]: E1101 09:13:01.280470     712 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"test-preload-398259\" not found"
	Nov 01 09:13:01 test-preload-398259 kubelet[712]: W1101 09:13:01.413808     712 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 192.168.76.2:8443: connect: connection refused
	Nov 01 09:13:01 test-preload-398259 kubelet[712]: E1101 09:13:01.413917     712 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 192.168.76.2:8443: connect: connection refused" logger="UnhandledError"
	Nov 01 09:13:05 test-preload-398259 kubelet[712]: E1101 09:13:05.902119     712 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/test-preload-398259?timeout=10s\": dial tcp 192.168.76.2:8443: connect: connection refused" interval="7s"
	Nov 01 09:13:06 test-preload-398259 kubelet[712]: I1101 09:13:06.066952     712 kubelet_node_status.go:76] "Attempting to register node" node="test-preload-398259"
	Nov 01 09:13:06 test-preload-398259 kubelet[712]: E1101 09:13:06.067344     712 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.76.2:8443: connect: connection refused" node="test-preload-398259"
	Nov 01 09:13:06 test-preload-398259 kubelet[712]: E1101 09:13:06.500574     712 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8443/api/v1/namespaces/default/events\": dial tcp 192.168.76.2:8443: connect: connection refused" event="&Event{ObjectMeta:{test-preload-398259.1873d6cc159345c3  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:test-preload-398259,UID:test-preload-398259,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node test-preload-398259 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:test-preload-398259,},FirstTimestamp:2025-11-01 09:07:31.255641539 +0000 UTC m=+0.083864452,LastTimestamp:2025-11-01 09:07:31.255641539 +0000 UTC m=+0.083864452,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:test-preload-398259,}"
	Nov 01 09:13:11 test-preload-398259 kubelet[712]: E1101 09:13:11.281337     712 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"test-preload-398259\" not found"
	Nov 01 09:13:12 test-preload-398259 kubelet[712]: E1101 09:13:12.903114     712 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/test-preload-398259?timeout=10s\": dial tcp 192.168.76.2:8443: connect: connection refused" interval="7s"
	Nov 01 09:13:13 test-preload-398259 kubelet[712]: I1101 09:13:13.068899     712 kubelet_node_status.go:76] "Attempting to register node" node="test-preload-398259"
	Nov 01 09:13:13 test-preload-398259 kubelet[712]: E1101 09:13:13.069330     712 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.76.2:8443: connect: connection refused" node="test-preload-398259"
	Nov 01 09:13:16 test-preload-398259 kubelet[712]: E1101 09:13:16.501981     712 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8443/api/v1/namespaces/default/events\": dial tcp 192.168.76.2:8443: connect: connection refused" event="&Event{ObjectMeta:{test-preload-398259.1873d6cc159345c3  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:test-preload-398259,UID:test-preload-398259,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node test-preload-398259 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:test-preload-398259,},FirstTimestamp:2025-11-01 09:07:31.255641539 +0000 UTC m=+0.083864452,LastTimestamp:2025-11-01 09:07:31.255641539 +0000 UTC m=+0.083864452,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:test-preload-398259,}"
	Nov 01 09:13:19 test-preload-398259 kubelet[712]: E1101 09:13:19.904171     712 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/test-preload-398259?timeout=10s\": dial tcp 192.168.76.2:8443: connect: connection refused" interval="7s"
	Nov 01 09:13:20 test-preload-398259 kubelet[712]: I1101 09:13:20.071249     712 kubelet_node_status.go:76] "Attempting to register node" node="test-preload-398259"
	Nov 01 09:13:20 test-preload-398259 kubelet[712]: E1101 09:13:20.071690     712 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.76.2:8443: connect: connection refused" node="test-preload-398259"
	Nov 01 09:13:21 test-preload-398259 kubelet[712]: E1101 09:13:21.282130     712 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"test-preload-398259\" not found"
	Nov 01 09:13:24 test-preload-398259 kubelet[712]: W1101 09:13:24.167806     712 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.76.2:8443: connect: connection refused
	Nov 01 09:13:24 test-preload-398259 kubelet[712]: E1101 09:13:24.167913     712 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.76.2:8443: connect: connection refused" logger="UnhandledError"
	Nov 01 09:13:26 test-preload-398259 kubelet[712]: W1101 09:13:26.463667     712 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dtest-preload-398259&limit=500&resourceVersion=0": dial tcp 192.168.76.2:8443: connect: connection refused
	Nov 01 09:13:26 test-preload-398259 kubelet[712]: E1101 09:13:26.463753     712 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dtest-preload-398259&limit=500&resourceVersion=0\": dial tcp 192.168.76.2:8443: connect: connection refused" logger="UnhandledError"
	Nov 01 09:13:26 test-preload-398259 kubelet[712]: E1101 09:13:26.503570     712 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8443/api/v1/namespaces/default/events\": dial tcp 192.168.76.2:8443: connect: connection refused" event="&Event{ObjectMeta:{test-preload-398259.1873d6cc159345c3  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:test-preload-398259,UID:test-preload-398259,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node test-preload-398259 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:test-preload-398259,},FirstTimestamp:2025-11-01 09:07:31.255641539 +0000 UTC m=+0.083864452,LastTimestamp:2025-11-01 09:07:31.255641539 +0000 UTC m=+0.083864452,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:test-preload-398259,}"
	Nov 01 09:13:26 test-preload-398259 kubelet[712]: E1101 09:13:26.905159     712 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/test-preload-398259?timeout=10s\": dial tcp 192.168.76.2:8443: connect: connection refused" interval="7s"
	Nov 01 09:13:27 test-preload-398259 kubelet[712]: I1101 09:13:27.073401     712 kubelet_node_status.go:76] "Attempting to register node" node="test-preload-398259"
	Nov 01 09:13:27 test-preload-398259 kubelet[712]: E1101 09:13:27.073811     712 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.76.2:8443: connect: connection refused" node="test-preload-398259"
	Nov 01 09:13:31 test-preload-398259 kubelet[712]: E1101 09:13:31.283215     712 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"test-preload-398259\" not found"
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p test-preload-398259 -n test-preload-398259
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p test-preload-398259 -n test-preload-398259: exit status 2 (309.594633ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "test-preload-398259" apiserver is not running, skipping kubectl commands (state="Stopped")
helpers_test.go:175: Cleaning up "test-preload-398259" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-398259
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-398259: (2.408430261s)
--- FAIL: TestPreload (425.87s)

                                                
                                    
TestPause/serial/Pause (6.13s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-349394 --alsologtostderr -v=5
pause_test.go:110: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p pause-349394 --alsologtostderr -v=5: exit status 80 (1.646264103s)

                                                
                                                
-- stdout --
	* Pausing node pause-349394 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1101 09:16:47.344284  204982 out.go:360] Setting OutFile to fd 1 ...
	I1101 09:16:47.344579  204982 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:16:47.344590  204982 out.go:374] Setting ErrFile to fd 2...
	I1101 09:16:47.344594  204982 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:16:47.344789  204982 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21835-5913/.minikube/bin
	I1101 09:16:47.345085  204982 out.go:368] Setting JSON to false
	I1101 09:16:47.345119  204982 mustload.go:66] Loading cluster: pause-349394
	I1101 09:16:47.345471  204982 config.go:182] Loaded profile config "pause-349394": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:16:47.345834  204982 cli_runner.go:164] Run: docker container inspect pause-349394 --format={{.State.Status}}
	I1101 09:16:47.365713  204982 host.go:66] Checking if "pause-349394" exists ...
	I1101 09:16:47.366052  204982 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 09:16:47.444046  204982 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:76 SystemTime:2025-11-01 09:16:47.430189209 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1101 09:16:47.444911  204982 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21800/minikube-v1.37.0-1761658712-21800-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1761658712-21800/minikube-v1.37.0-1761658712-21800-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1761658712-21800-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:pause-349394 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1101 09:16:47.450685  204982 out.go:179] * Pausing node pause-349394 ... 
	I1101 09:16:47.452225  204982 host.go:66] Checking if "pause-349394" exists ...
	I1101 09:16:47.452488  204982 ssh_runner.go:195] Run: systemctl --version
	I1101 09:16:47.452525  204982 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-349394
	I1101 09:16:47.474218  204982 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32988 SSHKeyPath:/home/jenkins/minikube-integration/21835-5913/.minikube/machines/pause-349394/id_rsa Username:docker}
	I1101 09:16:47.578159  204982 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 09:16:47.593659  204982 pause.go:52] kubelet running: true
	I1101 09:16:47.593881  204982 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1101 09:16:47.757236  204982 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1101 09:16:47.757348  204982 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1101 09:16:47.839743  204982 cri.go:89] found id: "33344c58eff1306b543a26dbf747e4baea69a19b868765195de20140ce386cf1"
	I1101 09:16:47.839768  204982 cri.go:89] found id: "4a8d5a2e7c83c5b4e8afac776468c5f7daaf2755b7a303482ad7dc0d21622766"
	I1101 09:16:47.839774  204982 cri.go:89] found id: "68129655f11eb2fbbff91f1a063a3abde455c30c6d01387be8b1f621cd09a76c"
	I1101 09:16:47.839779  204982 cri.go:89] found id: "8dfd3a6fbe5192e33887de6a4599d920604aacc1bcb4ae7456344f2139e27f32"
	I1101 09:16:47.839785  204982 cri.go:89] found id: "7bdecef247777d9b2c200252ec3f7f1536d2d34b567c29c214d7d9eac204c90c"
	I1101 09:16:47.839949  204982 cri.go:89] found id: "066ac59efc73e772486d4d3af8783931652a5d9a99a996276f490885d836b19b"
	I1101 09:16:47.839955  204982 cri.go:89] found id: "b5819d2b49cc99ce150a3c219c09f4887b9aa916a34d826d6b47a31ec0d67fc2"
	I1101 09:16:47.839959  204982 cri.go:89] found id: ""
	I1101 09:16:47.840108  204982 ssh_runner.go:195] Run: sudo runc list -f json
	I1101 09:16:47.862309  204982 retry.go:31] will retry after 307.343011ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T09:16:47Z" level=error msg="open /run/runc: no such file or directory"
	I1101 09:16:48.169880  204982 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 09:16:48.184166  204982 pause.go:52] kubelet running: false
	I1101 09:16:48.184244  204982 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1101 09:16:48.317497  204982 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1101 09:16:48.317610  204982 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1101 09:16:48.403893  204982 cri.go:89] found id: "33344c58eff1306b543a26dbf747e4baea69a19b868765195de20140ce386cf1"
	I1101 09:16:48.403937  204982 cri.go:89] found id: "4a8d5a2e7c83c5b4e8afac776468c5f7daaf2755b7a303482ad7dc0d21622766"
	I1101 09:16:48.403943  204982 cri.go:89] found id: "68129655f11eb2fbbff91f1a063a3abde455c30c6d01387be8b1f621cd09a76c"
	I1101 09:16:48.403949  204982 cri.go:89] found id: "8dfd3a6fbe5192e33887de6a4599d920604aacc1bcb4ae7456344f2139e27f32"
	I1101 09:16:48.403953  204982 cri.go:89] found id: "7bdecef247777d9b2c200252ec3f7f1536d2d34b567c29c214d7d9eac204c90c"
	I1101 09:16:48.403958  204982 cri.go:89] found id: "066ac59efc73e772486d4d3af8783931652a5d9a99a996276f490885d836b19b"
	I1101 09:16:48.403961  204982 cri.go:89] found id: "b5819d2b49cc99ce150a3c219c09f4887b9aa916a34d826d6b47a31ec0d67fc2"
	I1101 09:16:48.403965  204982 cri.go:89] found id: ""
	I1101 09:16:48.404013  204982 ssh_runner.go:195] Run: sudo runc list -f json
	I1101 09:16:48.418661  204982 retry.go:31] will retry after 261.392658ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T09:16:48Z" level=error msg="open /run/runc: no such file or directory"
	I1101 09:16:48.681309  204982 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 09:16:48.695955  204982 pause.go:52] kubelet running: false
	I1101 09:16:48.696040  204982 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1101 09:16:48.812119  204982 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1101 09:16:48.812203  204982 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1101 09:16:48.893158  204982 cri.go:89] found id: "33344c58eff1306b543a26dbf747e4baea69a19b868765195de20140ce386cf1"
	I1101 09:16:48.893182  204982 cri.go:89] found id: "4a8d5a2e7c83c5b4e8afac776468c5f7daaf2755b7a303482ad7dc0d21622766"
	I1101 09:16:48.893188  204982 cri.go:89] found id: "68129655f11eb2fbbff91f1a063a3abde455c30c6d01387be8b1f621cd09a76c"
	I1101 09:16:48.893192  204982 cri.go:89] found id: "8dfd3a6fbe5192e33887de6a4599d920604aacc1bcb4ae7456344f2139e27f32"
	I1101 09:16:48.893196  204982 cri.go:89] found id: "7bdecef247777d9b2c200252ec3f7f1536d2d34b567c29c214d7d9eac204c90c"
	I1101 09:16:48.893200  204982 cri.go:89] found id: "066ac59efc73e772486d4d3af8783931652a5d9a99a996276f490885d836b19b"
	I1101 09:16:48.893203  204982 cri.go:89] found id: "b5819d2b49cc99ce150a3c219c09f4887b9aa916a34d826d6b47a31ec0d67fc2"
	I1101 09:16:48.893206  204982 cri.go:89] found id: ""
	I1101 09:16:48.893256  204982 ssh_runner.go:195] Run: sudo runc list -f json
	I1101 09:16:48.909161  204982 out.go:203] 
	W1101 09:16:48.910455  204982 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T09:16:48Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T09:16:48Z" level=error msg="open /run/runc: no such file or directory"
	
	W1101 09:16:48.910476  204982 out.go:285] * 
	* 
	W1101 09:16:48.916047  204982 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1101 09:16:48.917395  204982 out.go:203] 

                                                
                                                
** /stderr **
pause_test.go:112: failed to pause minikube with args: "out/minikube-linux-amd64 pause -p pause-349394 --alsologtostderr -v=5" : exit status 80
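For triage context: the pause path shells into the node and runs `sudo runc list -f json`, which fails above with `open /run/runc: no such file or directory`, while the CRI-O configuration dumped later in this report sets `default_runtime = "crun"` (state under `/run/crun`); `/run/runc` is plausibly never created because nothing was started through runc directly. A minimal, hedged sketch for confirming that mismatch by hand, reusing the profile name and tools already shown in this report (these commands are not part of the test suite):

    out/minikube-linux-amd64 ssh -p pause-349394 "sudo crictl ps -a --quiet"       # CRI-O itself still lists the pod containers
    out/minikube-linux-amd64 ssh -p pause-349394 "sudo runc list -f json"          # reproduces: open /run/runc: no such file or directory
    out/minikube-linux-amd64 ssh -p pause-349394 "sudo ls -d /run/crun /run/runc"  # expect only crun's state root to exist, per the CRI-O config later in this report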
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestPause/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestPause/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect pause-349394
helpers_test.go:243: (dbg) docker inspect pause-349394:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "1cd90b3ac385f648c1fc7bb6e8d90bc206c045aad1ebfa632e174b7e3eda34ae",
	        "Created": "2025-11-01T09:15:34.591741435Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 182185,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-01T09:15:34.654337705Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:a1caeebaf98ed0136731e905a1e086f77985a42c2ebb5a7e0b3d0bd7fcbe10cc",
	        "ResolvConfPath": "/var/lib/docker/containers/1cd90b3ac385f648c1fc7bb6e8d90bc206c045aad1ebfa632e174b7e3eda34ae/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/1cd90b3ac385f648c1fc7bb6e8d90bc206c045aad1ebfa632e174b7e3eda34ae/hostname",
	        "HostsPath": "/var/lib/docker/containers/1cd90b3ac385f648c1fc7bb6e8d90bc206c045aad1ebfa632e174b7e3eda34ae/hosts",
	        "LogPath": "/var/lib/docker/containers/1cd90b3ac385f648c1fc7bb6e8d90bc206c045aad1ebfa632e174b7e3eda34ae/1cd90b3ac385f648c1fc7bb6e8d90bc206c045aad1ebfa632e174b7e3eda34ae-json.log",
	        "Name": "/pause-349394",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "pause-349394:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "pause-349394",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "1cd90b3ac385f648c1fc7bb6e8d90bc206c045aad1ebfa632e174b7e3eda34ae",
	                "LowerDir": "/var/lib/docker/overlay2/e87cd1ccda12b02ec2c3212f4a3e9388801e43271e77e49f60a30bfb41af98bd-init/diff:/var/lib/docker/overlay2/95b4043403bc6c2b0722fad0bb31817195e1e004282cc78b772e7c8c9f9def2d/diff",
	                "MergedDir": "/var/lib/docker/overlay2/e87cd1ccda12b02ec2c3212f4a3e9388801e43271e77e49f60a30bfb41af98bd/merged",
	                "UpperDir": "/var/lib/docker/overlay2/e87cd1ccda12b02ec2c3212f4a3e9388801e43271e77e49f60a30bfb41af98bd/diff",
	                "WorkDir": "/var/lib/docker/overlay2/e87cd1ccda12b02ec2c3212f4a3e9388801e43271e77e49f60a30bfb41af98bd/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "pause-349394",
	                "Source": "/var/lib/docker/volumes/pause-349394/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-349394",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-349394",
	                "name.minikube.sigs.k8s.io": "pause-349394",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "cfd101437d26ffed023d8b13a54c311b1a98577138b499c29d418aa0d918bd0d",
	            "SandboxKey": "/var/run/docker/netns/cfd101437d26",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32988"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32989"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32992"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32990"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32991"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "pause-349394": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.94.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:ae:90:ee:4f:c4",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "9f7bf95c63a9e9ee47a2f0e9bf18e978b7d45c4471af7b29d8929292f33fa545",
	                    "EndpointID": "01fad5493f0d7dac4650eec0ee21d8bb702942e723a3ba6861de1352e02d2e37",
	                    "Gateway": "192.168.94.1",
	                    "IPAddress": "192.168.94.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "pause-349394",
	                        "1cd90b3ac385"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
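The `NetworkSettings.Ports` map above is what the harness reads to find the node's SSH endpoint: the Go template run earlier in the stderr, `(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort`, resolves to 32988 for this container. A hedged one-liner to confirm the same mapping by hand with the docker CLI alone:

    docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' pause-349394
    # expected for this run: 32988 (matches the "22/tcp" HostPort in the inspect output above)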
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-349394 -n pause-349394
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p pause-349394 -n pause-349394: exit status 2 (406.665665ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestPause/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestPause/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p pause-349394 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p pause-349394 logs -n 25: (1.349889045s)
helpers_test.go:260: TestPause/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬─────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                           ARGS                                                                                                            │           PROFILE           │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼─────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p insufficient-storage-756482 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio                                                                                                          │ insufficient-storage-756482 │ jenkins │ v1.37.0 │ 01 Nov 25 09:15 UTC │                     │
	│ delete  │ -p insufficient-storage-756482                                                                                                                                                                                            │ insufficient-storage-756482 │ jenkins │ v1.37.0 │ 01 Nov 25 09:15 UTC │ 01 Nov 25 09:15 UTC │
	│ start   │ -p NoKubernetes-413481 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio                                                                                                             │ NoKubernetes-413481         │ jenkins │ v1.37.0 │ 01 Nov 25 09:15 UTC │                     │
	│ start   │ -p offline-crio-339605 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=docker  --container-runtime=crio                                                                                                         │ offline-crio-339605         │ jenkins │ v1.37.0 │ 01 Nov 25 09:15 UTC │ 01 Nov 25 09:16 UTC │
	│ start   │ -p force-systemd-env-363365 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                                                                                                │ force-systemd-env-363365    │ jenkins │ v1.37.0 │ 01 Nov 25 09:15 UTC │ 01 Nov 25 09:15 UTC │
	│ start   │ -p pause-349394 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio                                                                                                                 │ pause-349394                │ jenkins │ v1.37.0 │ 01 Nov 25 09:15 UTC │ 01 Nov 25 09:16 UTC │
	│ start   │ -p NoKubernetes-413481 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                                                                                                     │ NoKubernetes-413481         │ jenkins │ v1.37.0 │ 01 Nov 25 09:15 UTC │ 01 Nov 25 09:15 UTC │
	│ delete  │ -p force-systemd-env-363365                                                                                                                                                                                               │ force-systemd-env-363365    │ jenkins │ v1.37.0 │ 01 Nov 25 09:15 UTC │ 01 Nov 25 09:15 UTC │
	│ start   │ -p NoKubernetes-413481 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                                                                                     │ NoKubernetes-413481         │ jenkins │ v1.37.0 │ 01 Nov 25 09:15 UTC │ 01 Nov 25 09:16 UTC │
	│ start   │ -p force-systemd-flag-773418 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                                                                               │ force-systemd-flag-773418   │ jenkins │ v1.37.0 │ 01 Nov 25 09:15 UTC │ 01 Nov 25 09:16 UTC │
	│ delete  │ -p NoKubernetes-413481                                                                                                                                                                                                    │ NoKubernetes-413481         │ jenkins │ v1.37.0 │ 01 Nov 25 09:16 UTC │ 01 Nov 25 09:16 UTC │
	│ start   │ -p NoKubernetes-413481 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                                                                                     │ NoKubernetes-413481         │ jenkins │ v1.37.0 │ 01 Nov 25 09:16 UTC │ 01 Nov 25 09:16 UTC │
	│ ssh     │ -p NoKubernetes-413481 sudo systemctl is-active --quiet service kubelet                                                                                                                                                   │ NoKubernetes-413481         │ jenkins │ v1.37.0 │ 01 Nov 25 09:16 UTC │                     │
	│ ssh     │ force-systemd-flag-773418 ssh cat /etc/crio/crio.conf.d/02-crio.conf                                                                                                                                                      │ force-systemd-flag-773418   │ jenkins │ v1.37.0 │ 01 Nov 25 09:16 UTC │ 01 Nov 25 09:16 UTC │
	│ delete  │ -p force-systemd-flag-773418                                                                                                                                                                                              │ force-systemd-flag-773418   │ jenkins │ v1.37.0 │ 01 Nov 25 09:16 UTC │ 01 Nov 25 09:16 UTC │
	│ start   │ -p cert-expiration-303094 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio                                                                                                                    │ cert-expiration-303094      │ jenkins │ v1.37.0 │ 01 Nov 25 09:16 UTC │ 01 Nov 25 09:16 UTC │
	│ stop    │ -p NoKubernetes-413481                                                                                                                                                                                                    │ NoKubernetes-413481         │ jenkins │ v1.37.0 │ 01 Nov 25 09:16 UTC │ 01 Nov 25 09:16 UTC │
	│ start   │ -p NoKubernetes-413481 --driver=docker  --container-runtime=crio                                                                                                                                                          │ NoKubernetes-413481         │ jenkins │ v1.37.0 │ 01 Nov 25 09:16 UTC │ 01 Nov 25 09:16 UTC │
	│ ssh     │ -p NoKubernetes-413481 sudo systemctl is-active --quiet service kubelet                                                                                                                                                   │ NoKubernetes-413481         │ jenkins │ v1.37.0 │ 01 Nov 25 09:16 UTC │                     │
	│ delete  │ -p NoKubernetes-413481                                                                                                                                                                                                    │ NoKubernetes-413481         │ jenkins │ v1.37.0 │ 01 Nov 25 09:16 UTC │ 01 Nov 25 09:16 UTC │
	│ start   │ -p cert-options-403136 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio │ cert-options-403136         │ jenkins │ v1.37.0 │ 01 Nov 25 09:16 UTC │                     │
	│ start   │ -p pause-349394 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                                                          │ pause-349394                │ jenkins │ v1.37.0 │ 01 Nov 25 09:16 UTC │ 01 Nov 25 09:16 UTC │
	│ delete  │ -p offline-crio-339605                                                                                                                                                                                                    │ offline-crio-339605         │ jenkins │ v1.37.0 │ 01 Nov 25 09:16 UTC │ 01 Nov 25 09:16 UTC │
	│ pause   │ -p pause-349394 --alsologtostderr -v=5                                                                                                                                                                                    │ pause-349394                │ jenkins │ v1.37.0 │ 01 Nov 25 09:16 UTC │                     │
	│ start   │ -p stopped-upgrade-434419 --memory=3072 --vm-driver=docker  --container-runtime=crio                                                                                                                                      │ stopped-upgrade-434419      │ jenkins │ v1.32.0 │ 01 Nov 25 09:16 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴─────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/01 09:16:48
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.21.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1101 09:16:48.243116  205345 out.go:296] Setting OutFile to fd 1 ...
	I1101 09:16:48.243294  205345 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1101 09:16:48.243299  205345 out.go:309] Setting ErrFile to fd 2...
	I1101 09:16:48.243305  205345 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1101 09:16:48.243641  205345 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21835-5913/.minikube/bin
	I1101 09:16:48.244397  205345 out.go:303] Setting JSON to false
	I1101 09:16:48.245800  205345 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":3556,"bootTime":1761985052,"procs":277,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1101 09:16:48.245930  205345 start.go:138] virtualization: kvm guest
	I1101 09:16:48.252053  205345 out.go:177] * [stopped-upgrade-434419] minikube v1.32.0 on Ubuntu 22.04 (kvm/amd64)
	I1101 09:16:48.253701  205345 out.go:177]   - MINIKUBE_LOCATION=21835
	I1101 09:16:48.253773  205345 notify.go:220] Checking for updates...
	I1101 09:16:48.255073  205345 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1101 09:16:48.256407  205345 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21835-5913/.minikube
	I1101 09:16:48.257715  205345 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1101 09:16:48.259015  205345 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1101 09:16:48.260204  205345 out.go:177]   - KUBECONFIG=/tmp/legacy_kubeconfig951319388
	I1101 09:16:48.262556  205345 config.go:182] Loaded profile config "cert-expiration-303094": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:16:48.262708  205345 config.go:182] Loaded profile config "cert-options-403136": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:16:48.262937  205345 config.go:182] Loaded profile config "pause-349394": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:16:48.263061  205345 driver.go:378] Setting default libvirt URI to qemu:///system
	I1101 09:16:48.291539  205345 docker.go:122] docker version: linux-28.5.1:Docker Engine - Community
	I1101 09:16:48.291698  205345 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 09:16:48.322420  205345 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21835-5913/.minikube/last_update_check: {Name:mk42d01780494c6f8fcce33eafe5b40e039516d1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:16:48.324441  205345 out.go:177] * minikube 1.37.0 is available! Download it: https://github.com/kubernetes/minikube/releases/tag/v1.37.0
	I1101 09:16:48.326236  205345 out.go:177] * To disable this notice, run: 'minikube config set WantUpdateNotification false'
	
	I1101 09:16:48.359071  205345 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:76 SystemTime:2025-11-01 09:16:48.347699894 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1101 09:16:48.359190  205345 docker.go:295] overlay module found
	I1101 09:16:48.361812  205345 out.go:177] * Using the docker driver based on user configuration
	I1101 09:16:48.363093  205345 start.go:298] selected driver: docker
	I1101 09:16:48.363103  205345 start.go:902] validating driver "docker" against <nil>
	I1101 09:16:48.363115  205345 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1101 09:16:48.363776  205345 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 09:16:48.433535  205345 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:76 SystemTime:2025-11-01 09:16:48.422037488 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1101 09:16:48.433707  205345 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1101 09:16:48.433985  205345 start_flags.go:913] Wait components to verify : map[apiserver:true system_pods:true]
	I1101 09:16:48.435618  205345 out.go:177] * Using Docker driver with root privileges
	I1101 09:16:48.436827  205345 cni.go:84] Creating CNI manager for ""
	I1101 09:16:48.436841  205345 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1101 09:16:48.436853  205345 start_flags.go:318] Found "CNI" CNI - setting NetworkPlugin=cni
	I1101 09:16:48.436884  205345 start_flags.go:323] config:
	{Name:stopped-upgrade-434419 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:stopped-upgrade-434419 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket
: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1101 09:16:48.438357  205345 out.go:177] * Starting control plane node stopped-upgrade-434419 in cluster stopped-upgrade-434419
	I1101 09:16:48.439422  205345 cache.go:121] Beginning downloading kic base image for docker with crio
	I1101 09:16:48.441032  205345 out.go:177] * Pulling base image ...
	I1101 09:16:48.442201  205345 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime crio
	I1101 09:16:48.442309  205345 image.go:79] Checking for gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 in local docker daemon
	I1101 09:16:48.461549  205345 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 to local cache
	I1101 09:16:48.461757  205345 image.go:63] Checking for gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 in local cache directory
	I1101 09:16:48.461785  205345 image.go:118] Writing gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 to local cache
	I1101 09:16:48.470081  205345 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.3/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-amd64.tar.lz4
	I1101 09:16:48.470113  205345 cache.go:56] Caching tarball of preloaded images
	I1101 09:16:48.470290  205345 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime crio
	I1101 09:16:48.472204  205345 out.go:177] * Downloading Kubernetes v1.28.3 preload ...
	I1101 09:16:48.065635  201023 out.go:252]   - Booting up control plane ...
	I1101 09:16:48.065786  201023 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1101 09:16:48.065906  201023 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1101 09:16:48.066652  201023 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1101 09:16:48.085356  201023 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1101 09:16:48.085492  201023 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1101 09:16:48.094202  201023 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1101 09:16:48.095284  201023 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1101 09:16:48.095592  201023 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1101 09:16:48.219267  201023 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1101 09:16:48.219412  201023 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1101 09:16:48.720092  201023 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 501.892644ms
	I1101 09:16:48.723103  201023 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1101 09:16:48.723247  201023 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.103.2:8555/livez
	I1101 09:16:48.723368  201023 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1101 09:16:48.723468  201023 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	
	
	==> CRI-O <==
	Nov 01 09:16:44 pause-349394 crio[2175]: time="2025-11-01T09:16:44.071193109Z" level=info msg="Using conmon executable: /usr/libexec/crio/conmon"
	Nov 01 09:16:44 pause-349394 crio[2175]: time="2025-11-01T09:16:44.072038238Z" level=info msg="Conmon does support the --sync option"
	Nov 01 09:16:44 pause-349394 crio[2175]: time="2025-11-01T09:16:44.072055748Z" level=info msg="Conmon does support the --log-global-size-max option"
	Nov 01 09:16:44 pause-349394 crio[2175]: time="2025-11-01T09:16:44.072074203Z" level=info msg="Using conmon executable: /usr/libexec/crio/conmon"
	Nov 01 09:16:44 pause-349394 crio[2175]: time="2025-11-01T09:16:44.072889694Z" level=info msg="Conmon does support the --sync option"
	Nov 01 09:16:44 pause-349394 crio[2175]: time="2025-11-01T09:16:44.072923995Z" level=info msg="Conmon does support the --log-global-size-max option"
	Nov 01 09:16:44 pause-349394 crio[2175]: time="2025-11-01T09:16:44.077271083Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 01 09:16:44 pause-349394 crio[2175]: time="2025-11-01T09:16:44.077298927Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 01 09:16:44 pause-349394 crio[2175]: time="2025-11-01T09:16:44.077897005Z" level=info msg="Current CRI-O configuration:\n[crio]\n  root = \"/var/lib/containers/storage\"\n  runroot = \"/run/containers/storage\"\n  imagestore = \"\"\n  storage_driver = \"overlay\"\n  log_dir = \"/var/log/crio/pods\"\n  version_file = \"/var/run/crio/version\"\n  version_file_persist = \"\"\n  clean_shutdown_file = \"/var/lib/crio/clean.shutdown\"\n  internal_wipe = true\n  internal_repair = true\n  [crio.api]\n    grpc_max_send_msg_size = 83886080\n    grpc_max_recv_msg_size = 83886080\n    listen = \"/var/run/crio/crio.sock\"\n    stream_address = \"127.0.0.1\"\n    stream_port = \"0\"\n    stream_enable_tls = false\n    stream_tls_cert = \"\"\n    stream_tls_key = \"\"\n    stream_tls_ca = \"\"\n    stream_idle_timeout = \"\"\n  [crio.runtime]\n    no_pivot = false\n    selinux = false\n    log_to_journald = false\n    drop_infra_ctr = true\n    read_only = false\n    hooks_dir = [\"/usr/share/containers/oci/hoo
ks.d\"]\n    default_capabilities = [\"CHOWN\", \"DAC_OVERRIDE\", \"FSETID\", \"FOWNER\", \"SETGID\", \"SETUID\", \"SETPCAP\", \"NET_BIND_SERVICE\", \"KILL\"]\n    add_inheritable_capabilities = false\n    default_sysctls = [\"net.ipv4.ip_unprivileged_port_start=0\"]\n    allowed_devices = [\"/dev/fuse\", \"/dev/net/tun\"]\n    cdi_spec_dirs = [\"/etc/cdi\", \"/var/run/cdi\"]\n    device_ownership_from_security_context = false\n    default_runtime = \"crun\"\n    decryption_keys_path = \"/etc/crio/keys/\"\n    conmon = \"\"\n    conmon_cgroup = \"pod\"\n    seccomp_profile = \"\"\n    privileged_seccomp_profile = \"\"\n    apparmor_profile = \"crio-default\"\n    blockio_config_file = \"\"\n    blockio_reload = false\n    irqbalance_config_file = \"/etc/sysconfig/irqbalance\"\n    rdt_config_file = \"\"\n    cgroup_manager = \"systemd\"\n    default_mounts_file = \"\"\n    container_exits_dir = \"/var/run/crio/exits\"\n    container_attach_socket_dir = \"/var/run/crio\"\n    bind_mount_prefix = \"\"\n    uid_
mappings = \"\"\n    minimum_mappable_uid = -1\n    gid_mappings = \"\"\n    minimum_mappable_gid = -1\n    log_level = \"info\"\n    log_filter = \"\"\n    namespaces_dir = \"/var/run\"\n    pinns_path = \"/usr/bin/pinns\"\n    enable_criu_support = false\n    pids_limit = -1\n    log_size_max = -1\n    ctr_stop_timeout = 30\n    separate_pull_cgroup = \"\"\n    infra_ctr_cpuset = \"\"\n    shared_cpuset = \"\"\n    enable_pod_events = false\n    irqbalance_config_restore_file = \"/etc/sysconfig/orig_irq_banned_cpus\"\n    hostnetwork_disable_selinux = true\n    disable_hostport_mapping = false\n    timezone = \"\"\n    [crio.runtime.runtimes]\n      [crio.runtime.runtimes.crun]\n        runtime_config_path = \"\"\n        runtime_path = \"/usr/libexec/crio/crun\"\n        runtime_type = \"\"\n        runtime_root = \"/run/crun\"\n        allowed_annotations = [\"io.containers.trace-syscall\"]\n        monitor_path = \"/usr/libexec/crio/conmon\"\n        monitor_cgroup = \"pod\"\n        container_min_memory
= \"12MiB\"\n        no_sync_log = false\n      [crio.runtime.runtimes.runc]\n        runtime_config_path = \"\"\n        runtime_path = \"/usr/libexec/crio/runc\"\n        runtime_type = \"\"\n        runtime_root = \"/run/runc\"\n        monitor_path = \"/usr/libexec/crio/conmon\"\n        monitor_cgroup = \"pod\"\n        container_min_memory = \"12MiB\"\n        no_sync_log = false\n  [crio.image]\n    default_transport = \"docker://\"\n    global_auth_file = \"\"\n    pause_image = \"registry.k8s.io/pause:3.10.1\"\n    pause_image_auth_file = \"\"\n    pause_command = \"/pause\"\n    signature_policy = \"/etc/crio/policy.json\"\n    signature_policy_dir = \"/etc/crio/policies\"\n    image_volumes = \"mkdir\"\n    big_files_temporary_dir = \"\"\n    auto_reload_registries = false\n    pull_progress_timeout = \"0s\"\n    oci_artifact_mount_support = true\n    short_name_mode = \"enforcing\"\n  [crio.network]\n    cni_default_network = \"\"\n    network_dir = \"/etc/cni/net.d/\"\n    plugin_dirs = [\"/opt/
cni/bin/\"]\n  [crio.metrics]\n    enable_metrics = false\n    metrics_collectors = [\"image_pulls_layer_size\", \"containers_events_dropped_total\", \"containers_oom_total\", \"processes_defunct\", \"operations_total\", \"operations_latency_seconds\", \"operations_latency_seconds_total\", \"operations_errors_total\", \"image_pulls_bytes_total\", \"image_pulls_skipped_bytes_total\", \"image_pulls_failure_total\", \"image_pulls_success_total\", \"image_layer_reuse_total\", \"containers_oom_count_total\", \"containers_seccomp_notifier_count_total\", \"resources_stalled_at_stage\", \"containers_stopped_monitor_count\"]\n    metrics_host = \"127.0.0.1\"\n    metrics_port = 9090\n    metrics_socket = \"\"\n    metrics_cert = \"\"\n    metrics_key = \"\"\n  [crio.tracing]\n    enable_tracing = false\n    tracing_endpoint = \"127.0.0.1:4317\"\n    tracing_sampling_rate_per_million = 0\n  [crio.stats]\n    stats_collection_period = 0\n    collection_period = 0\n  [crio.nri]\n    enable_nri = true\n    nri_listen = \"
/var/run/nri/nri.sock\"\n    nri_plugin_dir = \"/opt/nri/plugins\"\n    nri_plugin_config_dir = \"/etc/nri/conf.d\"\n    nri_plugin_registration_timeout = \"5s\"\n    nri_plugin_request_timeout = \"2s\"\n    nri_disable_connections = false\n    [crio.nri.default_validator]\n      nri_enable_default_validator = false\n      nri_validator_reject_oci_hook_adjustment = false\n      nri_validator_reject_runtime_default_seccomp_adjustment = false\n      nri_validator_reject_unconfined_seccomp_adjustment = false\n      nri_validator_reject_custom_seccomp_adjustment = false\n      nri_validator_reject_namespace_adjustment = false\n      nri_validator_tolerate_missing_plugins_annotation = \"\"\n"
	Nov 01 09:16:44 pause-349394 crio[2175]: time="2025-11-01T09:16:44.078321273Z" level=info msg="Attempting to restore irqbalance config from /etc/sysconfig/orig_irq_banned_cpus"
	Nov 01 09:16:44 pause-349394 crio[2175]: time="2025-11-01T09:16:44.078372433Z" level=info msg="Restore irqbalance config: failed to get current CPU ban list, ignoring"
	Nov 01 09:16:44 pause-349394 crio[2175]: time="2025-11-01T09:16:44.084270875Z" level=info msg="No kernel support for IPv6: could not find nftables binary: exec: \"nft\": executable file not found in $PATH"
	Nov 01 09:16:44 pause-349394 crio[2175]: time="2025-11-01T09:16:44.128496098Z" level=info msg="Got pod network &{Name:coredns-66bc5c9577-ffqwr Namespace:kube-system ID:743ccb28d9a050110a3139ca0fc0da50e044ac3031d3dfd8e5625ec3611bffe4 UID:921445fa-0791-4702-84d0-51ac75b88ec0 NetNS:/var/run/netns/ead8c5da-a367-467e-880c-399fb46259f1 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc0001722c8}] Aliases:map[]}"
	Nov 01 09:16:44 pause-349394 crio[2175]: time="2025-11-01T09:16:44.128681993Z" level=info msg="Checking pod kube-system_coredns-66bc5c9577-ffqwr for CNI network kindnet (type=ptp)"
	Nov 01 09:16:44 pause-349394 crio[2175]: time="2025-11-01T09:16:44.129261737Z" level=info msg="Registered SIGHUP reload watcher"
	Nov 01 09:16:44 pause-349394 crio[2175]: time="2025-11-01T09:16:44.129294155Z" level=info msg="Starting seccomp notifier watcher"
	Nov 01 09:16:44 pause-349394 crio[2175]: time="2025-11-01T09:16:44.129397473Z" level=info msg="Create NRI interface"
	Nov 01 09:16:44 pause-349394 crio[2175]: time="2025-11-01T09:16:44.129486907Z" level=info msg="built-in NRI default validator is disabled"
	Nov 01 09:16:44 pause-349394 crio[2175]: time="2025-11-01T09:16:44.129497164Z" level=info msg="runtime interface created"
	Nov 01 09:16:44 pause-349394 crio[2175]: time="2025-11-01T09:16:44.129506466Z" level=info msg="Registered domain \"k8s.io\" with NRI"
	Nov 01 09:16:44 pause-349394 crio[2175]: time="2025-11-01T09:16:44.129511591Z" level=info msg="runtime interface starting up..."
	Nov 01 09:16:44 pause-349394 crio[2175]: time="2025-11-01T09:16:44.129516581Z" level=info msg="starting plugins..."
	Nov 01 09:16:44 pause-349394 crio[2175]: time="2025-11-01T09:16:44.129528786Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Nov 01 09:16:44 pause-349394 crio[2175]: time="2025-11-01T09:16:44.129934813Z" level=info msg="No systemd watchdog enabled"
	Nov 01 09:16:44 pause-349394 systemd[1]: Started crio.service - Container Runtime Interface for OCI (CRI-O).
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD                                    NAMESPACE
	33344c58eff13       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969   12 seconds ago       Running             coredns                   0                   743ccb28d9a05       coredns-66bc5c9577-ffqwr               kube-system
	4a8d5a2e7c83c       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c   54 seconds ago       Running             kindnet-cni               0                   3344e3a15cb58       kindnet-cnnft                          kube-system
	68129655f11eb       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7   54 seconds ago       Running             kube-proxy                0                   3f64450f1713f       kube-proxy-4xbbh                       kube-system
	8dfd3a6fbe519       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813   About a minute ago   Running             kube-scheduler            0                   76c88c107f5e2       kube-scheduler-pause-349394            kube-system
	7bdecef247777       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115   About a minute ago   Running             etcd                      0                   c37230063c7bc       etcd-pause-349394                      kube-system
	066ac59efc73e       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97   About a minute ago   Running             kube-apiserver            0                   337348cb26d21       kube-apiserver-pause-349394            kube-system
	b5819d2b49cc9       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f   About a minute ago   Running             kube-controller-manager   0                   33a16c82552f4       kube-controller-manager-pause-349394   kube-system
	
	
	==> coredns [33344c58eff1306b543a26dbf747e4baea69a19b868765195de20140ce386cf1] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = c7556d8fdf49c5e32a9077be8cfb9fc6947bb07e663a10d55b192eb63ad1f2bd9793e8e5f5a36fc9abb1957831eec5c997fd9821790e3990ae9531bf41ecea37
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:51259 - 21949 "HINFO IN 7162759984627093211.7381152140018459443. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.087294302s
	
	
	==> describe nodes <==
	Name:               pause-349394
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-349394
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=21e20c7776311c6e29254646bf2620ea610dd192
	                    minikube.k8s.io/name=pause-349394
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_01T09_15_51_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 01 Nov 2025 09:15:47 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-349394
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 01 Nov 2025 09:16:41 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 01 Nov 2025 09:16:37 +0000   Sat, 01 Nov 2025 09:15:46 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 01 Nov 2025 09:16:37 +0000   Sat, 01 Nov 2025 09:15:46 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 01 Nov 2025 09:16:37 +0000   Sat, 01 Nov 2025 09:15:46 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 01 Nov 2025 09:16:37 +0000   Sat, 01 Nov 2025 09:16:37 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.94.2
	  Hostname:    pause-349394
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863348Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863348Ki
	  pods:               110
	System Info:
	  Machine ID:                 98aac72b9abe9f06f1b9b38568f5cc96
	  System UUID:                d38e44a7-c7da-40cd-89b5-aa20695b16e5
	  Boot ID:                    87f40279-f2b1-40f3-beea-00aea5942dd1
	  Kernel Version:             6.8.0-1043-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-66bc5c9577-ffqwr                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     55s
	  kube-system                 etcd-pause-349394                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         60s
	  kube-system                 kindnet-cnnft                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      55s
	  kube-system                 kube-apiserver-pause-349394             250m (3%)     0 (0%)      0 (0%)           0 (0%)         60s
	  kube-system                 kube-controller-manager-pause-349394    200m (2%)     0 (0%)      0 (0%)           0 (0%)         61s
	  kube-system                 kube-proxy-4xbbh                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         55s
	  kube-system                 kube-scheduler-pause-349394             100m (1%)     0 (0%)      0 (0%)           0 (0%)         60s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 54s   kube-proxy       
	  Normal  Starting                 60s   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  60s   kubelet          Node pause-349394 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    60s   kubelet          Node pause-349394 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     60s   kubelet          Node pause-349394 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           56s   node-controller  Node pause-349394 event: Registered Node pause-349394 in Controller
	  Normal  NodeReady                13s   kubelet          Node pause-349394 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.000025] ll header: 00000000: 0a 94 8e 4a 1a 1e 72 e7 eb 50 9b c7 08 00
	[  +2.047726] IPv4: martian source 10.105.176.244 from 192.168.49.2, on dev eth0
	[  +0.000008] ll header: 00000000: 0a 94 8e 4a 1a 1e 72 e7 eb 50 9b c7 08 00
	[Nov 1 08:38] IPv4: martian source 10.105.176.244 from 192.168.49.2, on dev eth0
	[  +0.000025] ll header: 00000000: 0a 94 8e 4a 1a 1e 72 e7 eb 50 9b c7 08 00
	[  +2.215674] IPv4: martian source 10.105.176.244 from 192.168.49.2, on dev eth0
	[  +0.000007] ll header: 00000000: 0a 94 8e 4a 1a 1e 72 e7 eb 50 9b c7 08 00
	[  +1.047939] IPv4: martian source 10.244.0.3 from 192.168.49.2, on dev eth0
	[  +0.000023] ll header: 00000000: 0a 94 8e 4a 1a 1e 72 e7 eb 50 9b c7 08 00
	[  +1.023915] IPv4: martian source 10.244.0.3 from 192.168.49.2, on dev eth0
	[  +0.000007] ll header: 00000000: 0a 94 8e 4a 1a 1e 72 e7 eb 50 9b c7 08 00
	[  +1.023851] IPv4: martian source 10.244.0.3 from 192.168.49.2, on dev eth0
	[  +0.000021] ll header: 00000000: 0a 94 8e 4a 1a 1e 72 e7 eb 50 9b c7 08 00
	[  +1.023867] IPv4: martian source 10.244.0.3 from 192.168.49.2, on dev eth0
	[  +0.000026] ll header: 00000000: 0a 94 8e 4a 1a 1e 72 e7 eb 50 9b c7 08 00
	[  +1.023880] IPv4: martian source 10.244.0.3 from 192.168.49.2, on dev eth0
	[  +0.000033] ll header: 00000000: 0a 94 8e 4a 1a 1e 72 e7 eb 50 9b c7 08 00
	[  +1.023884] IPv4: martian source 10.244.0.3 from 192.168.49.2, on dev eth0
	[  +0.000029] ll header: 00000000: 0a 94 8e 4a 1a 1e 72 e7 eb 50 9b c7 08 00
	[  +1.023851] IPv4: martian source 10.244.0.3 from 192.168.49.2, on dev eth0
	[  +0.000028] ll header: 00000000: 0a 94 8e 4a 1a 1e 72 e7 eb 50 9b c7 08 00
	[  +4.031524] IPv4: martian source 10.244.0.3 from 192.168.49.2, on dev eth0
	[  +0.000026] ll header: 00000000: 0a 94 8e 4a 1a 1e 72 e7 eb 50 9b c7 08 00
	[  +8.255123] IPv4: martian source 10.244.0.3 from 192.168.49.2, on dev eth0
	[  +0.000022] ll header: 00000000: 0a 94 8e 4a 1a 1e 72 e7 eb 50 9b c7 08 00
	
	
	==> etcd [7bdecef247777d9b2c200252ec3f7f1536d2d34b567c29c214d7d9eac204c90c] <==
	{"level":"warn","ts":"2025-11-01T09:15:47.074725Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45808","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:15:47.082846Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45832","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:15:47.089925Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45846","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:15:47.097324Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45868","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:15:47.106060Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45874","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:15:47.113445Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45890","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:15:47.124280Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45906","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:15:47.133093Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45916","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:15:47.144933Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45926","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:15:47.162253Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45958","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:15:47.166706Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45968","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:15:47.174167Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45974","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:15:47.186152Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45986","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:15:47.257697Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46010","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-11-01T09:16:02.687693Z","caller":"traceutil/trace.go:172","msg":"trace[2122532009] linearizableReadLoop","detail":"{readStateIndex:391; appliedIndex:391; }","duration":"225.80583ms","start":"2025-11-01T09:16:02.461860Z","end":"2025-11-01T09:16:02.687666Z","steps":["trace[2122532009] 'read index received'  (duration: 225.797115ms)","trace[2122532009] 'applied index is now lower than readState.Index'  (duration: 7.295µs)"],"step_count":2}
	{"level":"warn","ts":"2025-11-01T09:16:02.687833Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"225.940796ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-11-01T09:16:02.687924Z","caller":"traceutil/trace.go:172","msg":"trace[669194114] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:379; }","duration":"226.064392ms","start":"2025-11-01T09:16:02.461848Z","end":"2025-11-01T09:16:02.687913Z","steps":["trace[669194114] 'agreement among raft nodes before linearized reading'  (duration: 225.90722ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-01T09:16:02.688106Z","caller":"traceutil/trace.go:172","msg":"trace[1205053273] transaction","detail":"{read_only:false; response_revision:380; number_of_response:1; }","duration":"253.352379ms","start":"2025-11-01T09:16:02.434741Z","end":"2025-11-01T09:16:02.688094Z","steps":["trace[1205053273] 'process raft request'  (duration: 252.982167ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-01T09:16:02.887498Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"134.119655ms","expected-duration":"100ms","prefix":"","request":"header:<ID:6571765875354749870 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/pods/kube-system/kube-scheduler-pause-349394\" mod_revision:380 > success:<request_put:<key:\"/registry/pods/kube-system/kube-scheduler-pause-349394\" value_size:4706 >> failure:<request_range:<key:\"/registry/pods/kube-system/kube-scheduler-pause-349394\" > >>","response":"size:16"}
	{"level":"info","ts":"2025-11-01T09:16:02.887611Z","caller":"traceutil/trace.go:172","msg":"trace[1223324963] transaction","detail":"{read_only:false; response_revision:381; number_of_response:1; }","duration":"187.186459ms","start":"2025-11-01T09:16:02.700409Z","end":"2025-11-01T09:16:02.887596Z","steps":["trace[1223324963] 'process raft request'  (duration: 52.09081ms)","trace[1223324963] 'compare'  (duration: 133.963105ms)"],"step_count":2}
	{"level":"info","ts":"2025-11-01T09:16:37.918845Z","caller":"traceutil/trace.go:172","msg":"trace[638945511] linearizableReadLoop","detail":"{readStateIndex:416; appliedIndex:416; }","duration":"105.864934ms","start":"2025-11-01T09:16:37.812953Z","end":"2025-11-01T09:16:37.918818Z","steps":["trace[638945511] 'read index received'  (duration: 105.851397ms)","trace[638945511] 'applied index is now lower than readState.Index'  (duration: 12.213µs)"],"step_count":2}
	{"level":"warn","ts":"2025-11-01T09:16:37.918985Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"106.012712ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-11-01T09:16:37.919050Z","caller":"traceutil/trace.go:172","msg":"trace[1774635505] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:397; }","duration":"106.094646ms","start":"2025-11-01T09:16:37.812943Z","end":"2025-11-01T09:16:37.919037Z","steps":["trace[1774635505] 'agreement among raft nodes before linearized reading'  (duration: 105.951662ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-01T09:16:37.919090Z","caller":"traceutil/trace.go:172","msg":"trace[544685844] transaction","detail":"{read_only:false; response_revision:398; number_of_response:1; }","duration":"128.457606ms","start":"2025-11-01T09:16:37.790618Z","end":"2025-11-01T09:16:37.919075Z","steps":["trace[544685844] 'process raft request'  (duration: 128.228911ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-01T09:16:38.721263Z","caller":"traceutil/trace.go:172","msg":"trace[665718669] transaction","detail":"{read_only:false; response_revision:401; number_of_response:1; }","duration":"146.484164ms","start":"2025-11-01T09:16:38.574762Z","end":"2025-11-01T09:16:38.721246Z","steps":["trace[665718669] 'process raft request'  (duration: 146.38324ms)"],"step_count":1}
	
	
	==> kernel <==
	 09:16:50 up 59 min,  0 user,  load average: 2.66, 1.18, 0.87
	Linux pause-349394 6.8.0-1043-gcp #46~22.04.1-Ubuntu SMP Wed Oct 22 19:00:03 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [4a8d5a2e7c83c5b4e8afac776468c5f7daaf2755b7a303482ad7dc0d21622766] <==
	I1101 09:15:56.229542       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1101 09:15:56.229836       1 main.go:139] hostIP = 192.168.94.2
	podIP = 192.168.94.2
	I1101 09:15:56.230032       1 main.go:148] setting mtu 1500 for CNI 
	I1101 09:15:56.230052       1 main.go:178] kindnetd IP family: "ipv4"
	I1101 09:15:56.230076       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-01T09:15:56Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1101 09:15:56.428980       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1101 09:15:56.429008       1 controller.go:381] "Waiting for informer caches to sync"
	I1101 09:15:56.429025       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1101 09:15:56.449849       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1101 09:16:26.430536       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1101 09:16:26.430532       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1101 09:16:26.430532       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1101 09:16:26.451207       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	I1101 09:16:27.629751       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1101 09:16:27.629808       1 metrics.go:72] Registering metrics
	I1101 09:16:27.629914       1 controller.go:711] "Syncing nftables rules"
	I1101 09:16:36.429139       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1101 09:16:36.429220       1 main.go:301] handling current node
	I1101 09:16:46.434955       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1101 09:16:46.435016       1 main.go:301] handling current node
	
	
	==> kube-apiserver [066ac59efc73e772486d4d3af8783931652a5d9a99a996276f490885d836b19b] <==
	I1101 09:15:47.803641       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1101 09:15:47.803661       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1101 09:15:47.804095       1 controller.go:667] quota admission added evaluator for: namespaces
	I1101 09:15:47.804753       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1101 09:15:47.814561       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1101 09:15:47.817261       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1101 09:15:47.839566       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1101 09:15:47.844486       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1101 09:15:48.720601       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1101 09:15:48.728084       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1101 09:15:48.728110       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1101 09:15:49.454322       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1101 09:15:49.510599       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1101 09:15:49.612897       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1101 09:15:49.632265       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.94.2]
	I1101 09:15:49.633706       1 controller.go:667] quota admission added evaluator for: endpoints
	I1101 09:15:49.639795       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1101 09:15:49.756175       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1101 09:15:50.434128       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1101 09:15:50.457490       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1101 09:15:50.468919       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1101 09:15:55.508951       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I1101 09:15:55.792110       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1101 09:15:55.796632       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1101 09:15:55.811340       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [b5819d2b49cc99ce150a3c219c09f4887b9aa916a34d826d6b47a31ec0d67fc2] <==
	I1101 09:15:54.753282       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1101 09:15:54.753293       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1101 09:15:54.753612       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1101 09:15:54.753640       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1101 09:15:54.754441       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1101 09:15:54.754468       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1101 09:15:54.754554       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1101 09:15:54.754598       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1101 09:15:54.754616       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1101 09:15:54.754642       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="pause-349394"
	I1101 09:15:54.754681       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1101 09:15:54.754874       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1101 09:15:54.755748       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1101 09:15:54.755781       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1101 09:15:54.757339       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1101 09:15:54.757357       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1101 09:15:54.757366       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1101 09:15:54.759211       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1101 09:15:54.760679       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1101 09:15:54.761646       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1101 09:15:54.761683       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1101 09:15:54.761696       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1101 09:15:54.764996       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1101 09:15:54.781502       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1101 09:16:39.759490       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [68129655f11eb2fbbff91f1a063a3abde455c30c6d01387be8b1f621cd09a76c] <==
	I1101 09:15:56.041049       1 server_linux.go:53] "Using iptables proxy"
	I1101 09:15:56.130593       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1101 09:15:56.231917       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1101 09:15:56.231953       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.94.2"]
	E1101 09:15:56.232058       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1101 09:15:56.253982       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1101 09:15:56.254041       1 server_linux.go:132] "Using iptables Proxier"
	I1101 09:15:56.261064       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1101 09:15:56.261527       1 server.go:527] "Version info" version="v1.34.1"
	I1101 09:15:56.261558       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1101 09:15:56.263255       1 config.go:200] "Starting service config controller"
	I1101 09:15:56.263271       1 config.go:106] "Starting endpoint slice config controller"
	I1101 09:15:56.263292       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1101 09:15:56.263290       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1101 09:15:56.263321       1 config.go:403] "Starting serviceCIDR config controller"
	I1101 09:15:56.263329       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1101 09:15:56.263378       1 config.go:309] "Starting node config controller"
	I1101 09:15:56.263420       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1101 09:15:56.263431       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1101 09:15:56.363480       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1101 09:15:56.363480       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1101 09:15:56.364093       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [8dfd3a6fbe5192e33887de6a4599d920604aacc1bcb4ae7456344f2139e27f32] <==
	E1101 09:15:47.888610       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1101 09:15:47.888675       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1101 09:15:47.888826       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1101 09:15:47.888905       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1101 09:15:47.888956       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1101 09:15:47.889018       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1101 09:15:47.889063       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1101 09:15:47.889100       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1101 09:15:47.889132       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1101 09:15:47.889180       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1101 09:15:48.729405       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1101 09:15:48.768791       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1101 09:15:48.777820       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1101 09:15:48.804347       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1101 09:15:48.825203       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1101 09:15:48.825203       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1101 09:15:48.911381       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1101 09:15:48.919095       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1101 09:15:48.919301       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1101 09:15:49.003068       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1101 09:15:49.062769       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1101 09:15:49.095646       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1101 09:15:49.178218       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1101 09:15:49.186166       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	I1101 09:15:52.172336       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 01 09:15:54 pause-349394 kubelet[1298]: I1101 09:15:54.768544    1298 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Nov 01 09:15:54 pause-349394 kubelet[1298]: I1101 09:15:54.769318    1298 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Nov 01 09:15:55 pause-349394 kubelet[1298]: I1101 09:15:55.577382    1298 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/35381609-683b-4b5c-b820-05f32d4ae095-xtables-lock\") pod \"kube-proxy-4xbbh\" (UID: \"35381609-683b-4b5c-b820-05f32d4ae095\") " pod="kube-system/kube-proxy-4xbbh"
	Nov 01 09:15:55 pause-349394 kubelet[1298]: I1101 09:15:55.577443    1298 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/35381609-683b-4b5c-b820-05f32d4ae095-kube-proxy\") pod \"kube-proxy-4xbbh\" (UID: \"35381609-683b-4b5c-b820-05f32d4ae095\") " pod="kube-system/kube-proxy-4xbbh"
	Nov 01 09:15:55 pause-349394 kubelet[1298]: I1101 09:15:55.577648    1298 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/35381609-683b-4b5c-b820-05f32d4ae095-lib-modules\") pod \"kube-proxy-4xbbh\" (UID: \"35381609-683b-4b5c-b820-05f32d4ae095\") " pod="kube-system/kube-proxy-4xbbh"
	Nov 01 09:15:55 pause-349394 kubelet[1298]: I1101 09:15:55.577677    1298 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-th4fk\" (UniqueName: \"kubernetes.io/projected/35381609-683b-4b5c-b820-05f32d4ae095-kube-api-access-th4fk\") pod \"kube-proxy-4xbbh\" (UID: \"35381609-683b-4b5c-b820-05f32d4ae095\") " pod="kube-system/kube-proxy-4xbbh"
	Nov 01 09:15:55 pause-349394 kubelet[1298]: I1101 09:15:55.577734    1298 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/4ecaeeef-5e55-4e20-abe4-5294a0b245ee-cni-cfg\") pod \"kindnet-cnnft\" (UID: \"4ecaeeef-5e55-4e20-abe4-5294a0b245ee\") " pod="kube-system/kindnet-cnnft"
	Nov 01 09:15:55 pause-349394 kubelet[1298]: I1101 09:15:55.577808    1298 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4ecaeeef-5e55-4e20-abe4-5294a0b245ee-xtables-lock\") pod \"kindnet-cnnft\" (UID: \"4ecaeeef-5e55-4e20-abe4-5294a0b245ee\") " pod="kube-system/kindnet-cnnft"
	Nov 01 09:15:55 pause-349394 kubelet[1298]: I1101 09:15:55.577830    1298 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4ecaeeef-5e55-4e20-abe4-5294a0b245ee-lib-modules\") pod \"kindnet-cnnft\" (UID: \"4ecaeeef-5e55-4e20-abe4-5294a0b245ee\") " pod="kube-system/kindnet-cnnft"
	Nov 01 09:15:55 pause-349394 kubelet[1298]: I1101 09:15:55.577894    1298 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fhjsh\" (UniqueName: \"kubernetes.io/projected/4ecaeeef-5e55-4e20-abe4-5294a0b245ee-kube-api-access-fhjsh\") pod \"kindnet-cnnft\" (UID: \"4ecaeeef-5e55-4e20-abe4-5294a0b245ee\") " pod="kube-system/kindnet-cnnft"
	Nov 01 09:15:56 pause-349394 kubelet[1298]: I1101 09:15:56.475513    1298 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-4xbbh" podStartSLOduration=1.475492976 podStartE2EDuration="1.475492976s" podCreationTimestamp="2025-11-01 09:15:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 09:15:56.475478886 +0000 UTC m=+6.258195671" watchObservedRunningTime="2025-11-01 09:15:56.475492976 +0000 UTC m=+6.258209760"
	Nov 01 09:15:56 pause-349394 kubelet[1298]: I1101 09:15:56.502657    1298 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-cnnft" podStartSLOduration=1.502623277 podStartE2EDuration="1.502623277s" podCreationTimestamp="2025-11-01 09:15:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 09:15:56.489680272 +0000 UTC m=+6.272397062" watchObservedRunningTime="2025-11-01 09:15:56.502623277 +0000 UTC m=+6.285340062"
	Nov 01 09:16:37 pause-349394 kubelet[1298]: I1101 09:16:37.036223    1298 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Nov 01 09:16:37 pause-349394 kubelet[1298]: I1101 09:16:37.186478    1298 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/921445fa-0791-4702-84d0-51ac75b88ec0-config-volume\") pod \"coredns-66bc5c9577-ffqwr\" (UID: \"921445fa-0791-4702-84d0-51ac75b88ec0\") " pod="kube-system/coredns-66bc5c9577-ffqwr"
	Nov 01 09:16:37 pause-349394 kubelet[1298]: I1101 09:16:37.186547    1298 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8g79j\" (UniqueName: \"kubernetes.io/projected/921445fa-0791-4702-84d0-51ac75b88ec0-kube-api-access-8g79j\") pod \"coredns-66bc5c9577-ffqwr\" (UID: \"921445fa-0791-4702-84d0-51ac75b88ec0\") " pod="kube-system/coredns-66bc5c9577-ffqwr"
	Nov 01 09:16:38 pause-349394 kubelet[1298]: I1101 09:16:38.724516    1298 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-ffqwr" podStartSLOduration=43.72449306 podStartE2EDuration="43.72449306s" podCreationTimestamp="2025-11-01 09:15:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 09:16:38.722696423 +0000 UTC m=+48.505413230" watchObservedRunningTime="2025-11-01 09:16:38.72449306 +0000 UTC m=+48.507209845"
	Nov 01 09:16:42 pause-349394 kubelet[1298]: W1101 09:16:42.414578    1298 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "/var/run/crio/crio.sock", ServerName: "localhost", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	Nov 01 09:16:42 pause-349394 kubelet[1298]: E1101 09:16:42.415730    1298 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\"" filter="state:{}"
	Nov 01 09:16:42 pause-349394 kubelet[1298]: E1101 09:16:42.415838    1298 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Nov 01 09:16:42 pause-349394 kubelet[1298]: E1101 09:16:42.415879    1298 kubelet_pods.go:1266] "Error listing containers" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Nov 01 09:16:42 pause-349394 kubelet[1298]: E1101 09:16:42.415906    1298 kubelet.go:2613] "Failed cleaning pods" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Nov 01 09:16:47 pause-349394 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 01 09:16:47 pause-349394 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 01 09:16:47 pause-349394 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Nov 01 09:16:47 pause-349394 systemd[1]: kubelet.service: Consumed 2.362s CPU time.
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-349394 -n pause-349394
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-349394 -n pause-349394: exit status 2 (461.311652ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context pause-349394 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestPause/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestPause/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestPause/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect pause-349394
helpers_test.go:243: (dbg) docker inspect pause-349394:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "1cd90b3ac385f648c1fc7bb6e8d90bc206c045aad1ebfa632e174b7e3eda34ae",
	        "Created": "2025-11-01T09:15:34.591741435Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 182185,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-01T09:15:34.654337705Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:a1caeebaf98ed0136731e905a1e086f77985a42c2ebb5a7e0b3d0bd7fcbe10cc",
	        "ResolvConfPath": "/var/lib/docker/containers/1cd90b3ac385f648c1fc7bb6e8d90bc206c045aad1ebfa632e174b7e3eda34ae/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/1cd90b3ac385f648c1fc7bb6e8d90bc206c045aad1ebfa632e174b7e3eda34ae/hostname",
	        "HostsPath": "/var/lib/docker/containers/1cd90b3ac385f648c1fc7bb6e8d90bc206c045aad1ebfa632e174b7e3eda34ae/hosts",
	        "LogPath": "/var/lib/docker/containers/1cd90b3ac385f648c1fc7bb6e8d90bc206c045aad1ebfa632e174b7e3eda34ae/1cd90b3ac385f648c1fc7bb6e8d90bc206c045aad1ebfa632e174b7e3eda34ae-json.log",
	        "Name": "/pause-349394",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "pause-349394:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "pause-349394",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "1cd90b3ac385f648c1fc7bb6e8d90bc206c045aad1ebfa632e174b7e3eda34ae",
	                "LowerDir": "/var/lib/docker/overlay2/e87cd1ccda12b02ec2c3212f4a3e9388801e43271e77e49f60a30bfb41af98bd-init/diff:/var/lib/docker/overlay2/95b4043403bc6c2b0722fad0bb31817195e1e004282cc78b772e7c8c9f9def2d/diff",
	                "MergedDir": "/var/lib/docker/overlay2/e87cd1ccda12b02ec2c3212f4a3e9388801e43271e77e49f60a30bfb41af98bd/merged",
	                "UpperDir": "/var/lib/docker/overlay2/e87cd1ccda12b02ec2c3212f4a3e9388801e43271e77e49f60a30bfb41af98bd/diff",
	                "WorkDir": "/var/lib/docker/overlay2/e87cd1ccda12b02ec2c3212f4a3e9388801e43271e77e49f60a30bfb41af98bd/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "pause-349394",
	                "Source": "/var/lib/docker/volumes/pause-349394/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-349394",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-349394",
	                "name.minikube.sigs.k8s.io": "pause-349394",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "cfd101437d26ffed023d8b13a54c311b1a98577138b499c29d418aa0d918bd0d",
	            "SandboxKey": "/var/run/docker/netns/cfd101437d26",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32988"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32989"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32992"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32990"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32991"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "pause-349394": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.94.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:ae:90:ee:4f:c4",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "9f7bf95c63a9e9ee47a2f0e9bf18e978b7d45c4471af7b29d8929292f33fa545",
	                    "EndpointID": "01fad5493f0d7dac4650eec0ee21d8bb702942e723a3ba6861de1352e02d2e37",
	                    "Gateway": "192.168.94.1",
	                    "IPAddress": "192.168.94.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "pause-349394",
	                        "1cd90b3ac385"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-349394 -n pause-349394
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p pause-349394 -n pause-349394: exit status 2 (439.78742ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestPause/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestPause/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p pause-349394 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p pause-349394 logs -n 25: (1.178866456s)
helpers_test.go:260: TestPause/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬─────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                           ARGS                                                                                                            │           PROFILE           │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼─────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p insufficient-storage-756482 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio                                                                                                          │ insufficient-storage-756482 │ jenkins │ v1.37.0 │ 01 Nov 25 09:15 UTC │                     │
	│ delete  │ -p insufficient-storage-756482                                                                                                                                                                                            │ insufficient-storage-756482 │ jenkins │ v1.37.0 │ 01 Nov 25 09:15 UTC │ 01 Nov 25 09:15 UTC │
	│ start   │ -p NoKubernetes-413481 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio                                                                                                             │ NoKubernetes-413481         │ jenkins │ v1.37.0 │ 01 Nov 25 09:15 UTC │                     │
	│ start   │ -p offline-crio-339605 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=docker  --container-runtime=crio                                                                                                         │ offline-crio-339605         │ jenkins │ v1.37.0 │ 01 Nov 25 09:15 UTC │ 01 Nov 25 09:16 UTC │
	│ start   │ -p force-systemd-env-363365 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                                                                                                │ force-systemd-env-363365    │ jenkins │ v1.37.0 │ 01 Nov 25 09:15 UTC │ 01 Nov 25 09:15 UTC │
	│ start   │ -p pause-349394 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio                                                                                                                 │ pause-349394                │ jenkins │ v1.37.0 │ 01 Nov 25 09:15 UTC │ 01 Nov 25 09:16 UTC │
	│ start   │ -p NoKubernetes-413481 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                                                                                                     │ NoKubernetes-413481         │ jenkins │ v1.37.0 │ 01 Nov 25 09:15 UTC │ 01 Nov 25 09:15 UTC │
	│ delete  │ -p force-systemd-env-363365                                                                                                                                                                                               │ force-systemd-env-363365    │ jenkins │ v1.37.0 │ 01 Nov 25 09:15 UTC │ 01 Nov 25 09:15 UTC │
	│ start   │ -p NoKubernetes-413481 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                                                                                     │ NoKubernetes-413481         │ jenkins │ v1.37.0 │ 01 Nov 25 09:15 UTC │ 01 Nov 25 09:16 UTC │
	│ start   │ -p force-systemd-flag-773418 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                                                                               │ force-systemd-flag-773418   │ jenkins │ v1.37.0 │ 01 Nov 25 09:15 UTC │ 01 Nov 25 09:16 UTC │
	│ delete  │ -p NoKubernetes-413481                                                                                                                                                                                                    │ NoKubernetes-413481         │ jenkins │ v1.37.0 │ 01 Nov 25 09:16 UTC │ 01 Nov 25 09:16 UTC │
	│ start   │ -p NoKubernetes-413481 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                                                                                     │ NoKubernetes-413481         │ jenkins │ v1.37.0 │ 01 Nov 25 09:16 UTC │ 01 Nov 25 09:16 UTC │
	│ ssh     │ -p NoKubernetes-413481 sudo systemctl is-active --quiet service kubelet                                                                                                                                                   │ NoKubernetes-413481         │ jenkins │ v1.37.0 │ 01 Nov 25 09:16 UTC │                     │
	│ ssh     │ force-systemd-flag-773418 ssh cat /etc/crio/crio.conf.d/02-crio.conf                                                                                                                                                      │ force-systemd-flag-773418   │ jenkins │ v1.37.0 │ 01 Nov 25 09:16 UTC │ 01 Nov 25 09:16 UTC │
	│ delete  │ -p force-systemd-flag-773418                                                                                                                                                                                              │ force-systemd-flag-773418   │ jenkins │ v1.37.0 │ 01 Nov 25 09:16 UTC │ 01 Nov 25 09:16 UTC │
	│ start   │ -p cert-expiration-303094 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio                                                                                                                    │ cert-expiration-303094      │ jenkins │ v1.37.0 │ 01 Nov 25 09:16 UTC │ 01 Nov 25 09:16 UTC │
	│ stop    │ -p NoKubernetes-413481                                                                                                                                                                                                    │ NoKubernetes-413481         │ jenkins │ v1.37.0 │ 01 Nov 25 09:16 UTC │ 01 Nov 25 09:16 UTC │
	│ start   │ -p NoKubernetes-413481 --driver=docker  --container-runtime=crio                                                                                                                                                          │ NoKubernetes-413481         │ jenkins │ v1.37.0 │ 01 Nov 25 09:16 UTC │ 01 Nov 25 09:16 UTC │
	│ ssh     │ -p NoKubernetes-413481 sudo systemctl is-active --quiet service kubelet                                                                                                                                                   │ NoKubernetes-413481         │ jenkins │ v1.37.0 │ 01 Nov 25 09:16 UTC │                     │
	│ delete  │ -p NoKubernetes-413481                                                                                                                                                                                                    │ NoKubernetes-413481         │ jenkins │ v1.37.0 │ 01 Nov 25 09:16 UTC │ 01 Nov 25 09:16 UTC │
	│ start   │ -p cert-options-403136 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio │ cert-options-403136         │ jenkins │ v1.37.0 │ 01 Nov 25 09:16 UTC │                     │
	│ start   │ -p pause-349394 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                                                          │ pause-349394                │ jenkins │ v1.37.0 │ 01 Nov 25 09:16 UTC │ 01 Nov 25 09:16 UTC │
	│ delete  │ -p offline-crio-339605                                                                                                                                                                                                    │ offline-crio-339605         │ jenkins │ v1.37.0 │ 01 Nov 25 09:16 UTC │ 01 Nov 25 09:16 UTC │
	│ pause   │ -p pause-349394 --alsologtostderr -v=5                                                                                                                                                                                    │ pause-349394                │ jenkins │ v1.37.0 │ 01 Nov 25 09:16 UTC │                     │
	│ start   │ -p stopped-upgrade-434419 --memory=3072 --vm-driver=docker  --container-runtime=crio                                                                                                                                      │ stopped-upgrade-434419      │ jenkins │ v1.32.0 │ 01 Nov 25 09:16 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴─────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/01 09:16:48
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.21.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1101 09:16:48.243116  205345 out.go:296] Setting OutFile to fd 1 ...
	I1101 09:16:48.243294  205345 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1101 09:16:48.243299  205345 out.go:309] Setting ErrFile to fd 2...
	I1101 09:16:48.243305  205345 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1101 09:16:48.243641  205345 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21835-5913/.minikube/bin
	I1101 09:16:48.244397  205345 out.go:303] Setting JSON to false
	I1101 09:16:48.245800  205345 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":3556,"bootTime":1761985052,"procs":277,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1101 09:16:48.245930  205345 start.go:138] virtualization: kvm guest
	I1101 09:16:48.252053  205345 out.go:177] * [stopped-upgrade-434419] minikube v1.32.0 on Ubuntu 22.04 (kvm/amd64)
	I1101 09:16:48.253701  205345 out.go:177]   - MINIKUBE_LOCATION=21835
	I1101 09:16:48.253773  205345 notify.go:220] Checking for updates...
	I1101 09:16:48.255073  205345 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1101 09:16:48.256407  205345 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21835-5913/.minikube
	I1101 09:16:48.257715  205345 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1101 09:16:48.259015  205345 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1101 09:16:48.260204  205345 out.go:177]   - KUBECONFIG=/tmp/legacy_kubeconfig951319388
	I1101 09:16:48.262556  205345 config.go:182] Loaded profile config "cert-expiration-303094": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:16:48.262708  205345 config.go:182] Loaded profile config "cert-options-403136": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:16:48.262937  205345 config.go:182] Loaded profile config "pause-349394": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:16:48.263061  205345 driver.go:378] Setting default libvirt URI to qemu:///system
	I1101 09:16:48.291539  205345 docker.go:122] docker version: linux-28.5.1:Docker Engine - Community
	I1101 09:16:48.291698  205345 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 09:16:48.322420  205345 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21835-5913/.minikube/last_update_check: {Name:mk42d01780494c6f8fcce33eafe5b40e039516d1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:16:48.324441  205345 out.go:177] * minikube 1.37.0 is available! Download it: https://github.com/kubernetes/minikube/releases/tag/v1.37.0
	I1101 09:16:48.326236  205345 out.go:177] * To disable this notice, run: 'minikube config set WantUpdateNotification false'
	
	I1101 09:16:48.359071  205345 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:76 SystemTime:2025-11-01 09:16:48.347699894 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1101 09:16:48.359190  205345 docker.go:295] overlay module found
	I1101 09:16:48.361812  205345 out.go:177] * Using the docker driver based on user configuration
	I1101 09:16:48.363093  205345 start.go:298] selected driver: docker
	I1101 09:16:48.363103  205345 start.go:902] validating driver "docker" against <nil>
	I1101 09:16:48.363115  205345 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1101 09:16:48.363776  205345 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 09:16:48.433535  205345 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:76 SystemTime:2025-11-01 09:16:48.422037488 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1101 09:16:48.433707  205345 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1101 09:16:48.433985  205345 start_flags.go:913] Wait components to verify : map[apiserver:true system_pods:true]
	I1101 09:16:48.435618  205345 out.go:177] * Using Docker driver with root privileges
	I1101 09:16:48.436827  205345 cni.go:84] Creating CNI manager for ""
	I1101 09:16:48.436841  205345 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1101 09:16:48.436853  205345 start_flags.go:318] Found "CNI" CNI - setting NetworkPlugin=cni
	I1101 09:16:48.436884  205345 start_flags.go:323] config:
	{Name:stopped-upgrade-434419 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:stopped-upgrade-434419 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket
: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1101 09:16:48.438357  205345 out.go:177] * Starting control plane node stopped-upgrade-434419 in cluster stopped-upgrade-434419
	I1101 09:16:48.439422  205345 cache.go:121] Beginning downloading kic base image for docker with crio
	I1101 09:16:48.441032  205345 out.go:177] * Pulling base image ...
	I1101 09:16:48.442201  205345 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime crio
	I1101 09:16:48.442309  205345 image.go:79] Checking for gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 in local docker daemon
	I1101 09:16:48.461549  205345 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 to local cache
	I1101 09:16:48.461757  205345 image.go:63] Checking for gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 in local cache directory
	I1101 09:16:48.461785  205345 image.go:118] Writing gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 to local cache
	I1101 09:16:48.470081  205345 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.3/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-amd64.tar.lz4
	I1101 09:16:48.470113  205345 cache.go:56] Caching tarball of preloaded images
	I1101 09:16:48.470290  205345 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime crio
	I1101 09:16:48.472204  205345 out.go:177] * Downloading Kubernetes v1.28.3 preload ...
	I1101 09:16:48.065635  201023 out.go:252]   - Booting up control plane ...
	I1101 09:16:48.065786  201023 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1101 09:16:48.065906  201023 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1101 09:16:48.066652  201023 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1101 09:16:48.085356  201023 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1101 09:16:48.085492  201023 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1101 09:16:48.094202  201023 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1101 09:16:48.095284  201023 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1101 09:16:48.095592  201023 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1101 09:16:48.219267  201023 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1101 09:16:48.219412  201023 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1101 09:16:48.720092  201023 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 501.892644ms
	I1101 09:16:48.723103  201023 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1101 09:16:48.723247  201023 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.103.2:8555/livez
	I1101 09:16:48.723368  201023 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1101 09:16:48.723468  201023 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	
	
	==> CRI-O <==
	Nov 01 09:16:44 pause-349394 crio[2175]: time="2025-11-01T09:16:44.071193109Z" level=info msg="Using conmon executable: /usr/libexec/crio/conmon"
	Nov 01 09:16:44 pause-349394 crio[2175]: time="2025-11-01T09:16:44.072038238Z" level=info msg="Conmon does support the --sync option"
	Nov 01 09:16:44 pause-349394 crio[2175]: time="2025-11-01T09:16:44.072055748Z" level=info msg="Conmon does support the --log-global-size-max option"
	Nov 01 09:16:44 pause-349394 crio[2175]: time="2025-11-01T09:16:44.072074203Z" level=info msg="Using conmon executable: /usr/libexec/crio/conmon"
	Nov 01 09:16:44 pause-349394 crio[2175]: time="2025-11-01T09:16:44.072889694Z" level=info msg="Conmon does support the --sync option"
	Nov 01 09:16:44 pause-349394 crio[2175]: time="2025-11-01T09:16:44.072923995Z" level=info msg="Conmon does support the --log-global-size-max option"
	Nov 01 09:16:44 pause-349394 crio[2175]: time="2025-11-01T09:16:44.077271083Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 01 09:16:44 pause-349394 crio[2175]: time="2025-11-01T09:16:44.077298927Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 01 09:16:44 pause-349394 crio[2175]: time="2025-11-01T09:16:44.077897005Z" level=info msg="Current CRI-O configuration:\n[crio]\n  root = \"/var/lib/containers/storage\"\n  runroot = \"/run/containers/storage\"\n  imagestore = \"\"\n  storage_driver = \"overlay\"\n  log_dir = \"/var/log/crio/pods\"\n  version_file = \"/var/run/crio/version\"\n  version_file_persist = \"\"\n  clean_shutdown_file = \"/var/lib/crio/clean.shutdown\"\n  internal_wipe = true\n  internal_repair = true\n  [crio.api]\n    grpc_max_send_msg_size = 83886080\n    grpc_max_recv_msg_size = 83886080\n    listen = \"/var/run/crio/crio.sock\"\n    stream_address = \"127.0.0.1\"\n    stream_port = \"0\"\n    stream_enable_tls = false\n    stream_tls_cert = \"\"\n    stream_tls_key = \"\"\n    stream_tls_ca = \"\"\n    stream_idle_timeout = \"\"\n  [crio.runtime]\n    no_pivot = false\n    selinux = false\n    log_to_journald = false\n    drop_infra_ctr = true\n    read_only = false\n    hooks_dir = [\"/usr/share/containers/oci/hoo
ks.d\"]\n    default_capabilities = [\"CHOWN\", \"DAC_OVERRIDE\", \"FSETID\", \"FOWNER\", \"SETGID\", \"SETUID\", \"SETPCAP\", \"NET_BIND_SERVICE\", \"KILL\"]\n    add_inheritable_capabilities = false\n    default_sysctls = [\"net.ipv4.ip_unprivileged_port_start=0\"]\n    allowed_devices = [\"/dev/fuse\", \"/dev/net/tun\"]\n    cdi_spec_dirs = [\"/etc/cdi\", \"/var/run/cdi\"]\n    device_ownership_from_security_context = false\n    default_runtime = \"crun\"\n    decryption_keys_path = \"/etc/crio/keys/\"\n    conmon = \"\"\n    conmon_cgroup = \"pod\"\n    seccomp_profile = \"\"\n    privileged_seccomp_profile = \"\"\n    apparmor_profile = \"crio-default\"\n    blockio_config_file = \"\"\n    blockio_reload = false\n    irqbalance_config_file = \"/etc/sysconfig/irqbalance\"\n    rdt_config_file = \"\"\n    cgroup_manager = \"systemd\"\n    default_mounts_file = \"\"\n    container_exits_dir = \"/var/run/crio/exits\"\n    container_attach_socket_dir = \"/var/run/crio\"\n    bind_mount_prefix = \"\"\n    uid_
mappings = \"\"\n    minimum_mappable_uid = -1\n    gid_mappings = \"\"\n    minimum_mappable_gid = -1\n    log_level = \"info\"\n    log_filter = \"\"\n    namespaces_dir = \"/var/run\"\n    pinns_path = \"/usr/bin/pinns\"\n    enable_criu_support = false\n    pids_limit = -1\n    log_size_max = -1\n    ctr_stop_timeout = 30\n    separate_pull_cgroup = \"\"\n    infra_ctr_cpuset = \"\"\n    shared_cpuset = \"\"\n    enable_pod_events = false\n    irqbalance_config_restore_file = \"/etc/sysconfig/orig_irq_banned_cpus\"\n    hostnetwork_disable_selinux = true\n    disable_hostport_mapping = false\n    timezone = \"\"\n    [crio.runtime.runtimes]\n      [crio.runtime.runtimes.crun]\n        runtime_config_path = \"\"\n        runtime_path = \"/usr/libexec/crio/crun\"\n        runtime_type = \"\"\n        runtime_root = \"/run/crun\"\n        allowed_annotations = [\"io.containers.trace-syscall\"]\n        monitor_path = \"/usr/libexec/crio/conmon\"\n        monitor_cgroup = \"pod\"\n        container_min_memory
= \"12MiB\"\n        no_sync_log = false\n      [crio.runtime.runtimes.runc]\n        runtime_config_path = \"\"\n        runtime_path = \"/usr/libexec/crio/runc\"\n        runtime_type = \"\"\n        runtime_root = \"/run/runc\"\n        monitor_path = \"/usr/libexec/crio/conmon\"\n        monitor_cgroup = \"pod\"\n        container_min_memory = \"12MiB\"\n        no_sync_log = false\n  [crio.image]\n    default_transport = \"docker://\"\n    global_auth_file = \"\"\n    pause_image = \"registry.k8s.io/pause:3.10.1\"\n    pause_image_auth_file = \"\"\n    pause_command = \"/pause\"\n    signature_policy = \"/etc/crio/policy.json\"\n    signature_policy_dir = \"/etc/crio/policies\"\n    image_volumes = \"mkdir\"\n    big_files_temporary_dir = \"\"\n    auto_reload_registries = false\n    pull_progress_timeout = \"0s\"\n    oci_artifact_mount_support = true\n    short_name_mode = \"enforcing\"\n  [crio.network]\n    cni_default_network = \"\"\n    network_dir = \"/etc/cni/net.d/\"\n    plugin_dirs = [\"/opt/
cni/bin/\"]\n  [crio.metrics]\n    enable_metrics = false\n    metrics_collectors = [\"image_pulls_layer_size\", \"containers_events_dropped_total\", \"containers_oom_total\", \"processes_defunct\", \"operations_total\", \"operations_latency_seconds\", \"operations_latency_seconds_total\", \"operations_errors_total\", \"image_pulls_bytes_total\", \"image_pulls_skipped_bytes_total\", \"image_pulls_failure_total\", \"image_pulls_success_total\", \"image_layer_reuse_total\", \"containers_oom_count_total\", \"containers_seccomp_notifier_count_total\", \"resources_stalled_at_stage\", \"containers_stopped_monitor_count\"]\n    metrics_host = \"127.0.0.1\"\n    metrics_port = 9090\n    metrics_socket = \"\"\n    metrics_cert = \"\"\n    metrics_key = \"\"\n  [crio.tracing]\n    enable_tracing = false\n    tracing_endpoint = \"127.0.0.1:4317\"\n    tracing_sampling_rate_per_million = 0\n  [crio.stats]\n    stats_collection_period = 0\n    collection_period = 0\n  [crio.nri]\n    enable_nri = true\n    nri_listen = \"
/var/run/nri/nri.sock\"\n    nri_plugin_dir = \"/opt/nri/plugins\"\n    nri_plugin_config_dir = \"/etc/nri/conf.d\"\n    nri_plugin_registration_timeout = \"5s\"\n    nri_plugin_request_timeout = \"2s\"\n    nri_disable_connections = false\n    [crio.nri.default_validator]\n      nri_enable_default_validator = false\n      nri_validator_reject_oci_hook_adjustment = false\n      nri_validator_reject_runtime_default_seccomp_adjustment = false\n      nri_validator_reject_unconfined_seccomp_adjustment = false\n      nri_validator_reject_custom_seccomp_adjustment = false\n      nri_validator_reject_namespace_adjustment = false\n      nri_validator_tolerate_missing_plugins_annotation = \"\"\n"
	Nov 01 09:16:44 pause-349394 crio[2175]: time="2025-11-01T09:16:44.078321273Z" level=info msg="Attempting to restore irqbalance config from /etc/sysconfig/orig_irq_banned_cpus"
	Nov 01 09:16:44 pause-349394 crio[2175]: time="2025-11-01T09:16:44.078372433Z" level=info msg="Restore irqbalance config: failed to get current CPU ban list, ignoring"
	Nov 01 09:16:44 pause-349394 crio[2175]: time="2025-11-01T09:16:44.084270875Z" level=info msg="No kernel support for IPv6: could not find nftables binary: exec: \"nft\": executable file not found in $PATH"
	Nov 01 09:16:44 pause-349394 crio[2175]: time="2025-11-01T09:16:44.128496098Z" level=info msg="Got pod network &{Name:coredns-66bc5c9577-ffqwr Namespace:kube-system ID:743ccb28d9a050110a3139ca0fc0da50e044ac3031d3dfd8e5625ec3611bffe4 UID:921445fa-0791-4702-84d0-51ac75b88ec0 NetNS:/var/run/netns/ead8c5da-a367-467e-880c-399fb46259f1 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc0001722c8}] Aliases:map[]}"
	Nov 01 09:16:44 pause-349394 crio[2175]: time="2025-11-01T09:16:44.128681993Z" level=info msg="Checking pod kube-system_coredns-66bc5c9577-ffqwr for CNI network kindnet (type=ptp)"
	Nov 01 09:16:44 pause-349394 crio[2175]: time="2025-11-01T09:16:44.129261737Z" level=info msg="Registered SIGHUP reload watcher"
	Nov 01 09:16:44 pause-349394 crio[2175]: time="2025-11-01T09:16:44.129294155Z" level=info msg="Starting seccomp notifier watcher"
	Nov 01 09:16:44 pause-349394 crio[2175]: time="2025-11-01T09:16:44.129397473Z" level=info msg="Create NRI interface"
	Nov 01 09:16:44 pause-349394 crio[2175]: time="2025-11-01T09:16:44.129486907Z" level=info msg="built-in NRI default validator is disabled"
	Nov 01 09:16:44 pause-349394 crio[2175]: time="2025-11-01T09:16:44.129497164Z" level=info msg="runtime interface created"
	Nov 01 09:16:44 pause-349394 crio[2175]: time="2025-11-01T09:16:44.129506466Z" level=info msg="Registered domain \"k8s.io\" with NRI"
	Nov 01 09:16:44 pause-349394 crio[2175]: time="2025-11-01T09:16:44.129511591Z" level=info msg="runtime interface starting up..."
	Nov 01 09:16:44 pause-349394 crio[2175]: time="2025-11-01T09:16:44.129516581Z" level=info msg="starting plugins..."
	Nov 01 09:16:44 pause-349394 crio[2175]: time="2025-11-01T09:16:44.129528786Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Nov 01 09:16:44 pause-349394 crio[2175]: time="2025-11-01T09:16:44.129934813Z" level=info msg="No systemd watchdog enabled"
	Nov 01 09:16:44 pause-349394 systemd[1]: Started crio.service - Container Runtime Interface for OCI (CRI-O).
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD                                    NAMESPACE
	33344c58eff13       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969   14 seconds ago       Running             coredns                   0                   743ccb28d9a05       coredns-66bc5c9577-ffqwr               kube-system
	4a8d5a2e7c83c       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c   56 seconds ago       Running             kindnet-cni               0                   3344e3a15cb58       kindnet-cnnft                          kube-system
	68129655f11eb       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7   56 seconds ago       Running             kube-proxy                0                   3f64450f1713f       kube-proxy-4xbbh                       kube-system
	8dfd3a6fbe519       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813   About a minute ago   Running             kube-scheduler            0                   76c88c107f5e2       kube-scheduler-pause-349394            kube-system
	7bdecef247777       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115   About a minute ago   Running             etcd                      0                   c37230063c7bc       etcd-pause-349394                      kube-system
	066ac59efc73e       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97   About a minute ago   Running             kube-apiserver            0                   337348cb26d21       kube-apiserver-pause-349394            kube-system
	b5819d2b49cc9       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f   About a minute ago   Running             kube-controller-manager   0                   33a16c82552f4       kube-controller-manager-pause-349394   kube-system
	
	
	==> coredns [33344c58eff1306b543a26dbf747e4baea69a19b868765195de20140ce386cf1] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = c7556d8fdf49c5e32a9077be8cfb9fc6947bb07e663a10d55b192eb63ad1f2bd9793e8e5f5a36fc9abb1957831eec5c997fd9821790e3990ae9531bf41ecea37
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:51259 - 21949 "HINFO IN 7162759984627093211.7381152140018459443. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.087294302s
	
	
	==> describe nodes <==
	Name:               pause-349394
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-349394
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=21e20c7776311c6e29254646bf2620ea610dd192
	                    minikube.k8s.io/name=pause-349394
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_01T09_15_51_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 01 Nov 2025 09:15:47 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-349394
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 01 Nov 2025 09:16:41 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 01 Nov 2025 09:16:37 +0000   Sat, 01 Nov 2025 09:15:46 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 01 Nov 2025 09:16:37 +0000   Sat, 01 Nov 2025 09:15:46 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 01 Nov 2025 09:16:37 +0000   Sat, 01 Nov 2025 09:15:46 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 01 Nov 2025 09:16:37 +0000   Sat, 01 Nov 2025 09:16:37 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.94.2
	  Hostname:    pause-349394
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863348Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863348Ki
	  pods:               110
	System Info:
	  Machine ID:                 98aac72b9abe9f06f1b9b38568f5cc96
	  System UUID:                d38e44a7-c7da-40cd-89b5-aa20695b16e5
	  Boot ID:                    87f40279-f2b1-40f3-beea-00aea5942dd1
	  Kernel Version:             6.8.0-1043-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-66bc5c9577-ffqwr                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     57s
	  kube-system                 etcd-pause-349394                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         62s
	  kube-system                 kindnet-cnnft                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      57s
	  kube-system                 kube-apiserver-pause-349394             250m (3%)     0 (0%)      0 (0%)           0 (0%)         62s
	  kube-system                 kube-controller-manager-pause-349394    200m (2%)     0 (0%)      0 (0%)           0 (0%)         63s
	  kube-system                 kube-proxy-4xbbh                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         57s
	  kube-system                 kube-scheduler-pause-349394             100m (1%)     0 (0%)      0 (0%)           0 (0%)         62s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 56s   kube-proxy       
	  Normal  Starting                 62s   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  62s   kubelet          Node pause-349394 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    62s   kubelet          Node pause-349394 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     62s   kubelet          Node pause-349394 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           58s   node-controller  Node pause-349394 event: Registered Node pause-349394 in Controller
	  Normal  NodeReady                15s   kubelet          Node pause-349394 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.000025] ll header: 00000000: 0a 94 8e 4a 1a 1e 72 e7 eb 50 9b c7 08 00
	[  +2.047726] IPv4: martian source 10.105.176.244 from 192.168.49.2, on dev eth0
	[  +0.000008] ll header: 00000000: 0a 94 8e 4a 1a 1e 72 e7 eb 50 9b c7 08 00
	[Nov 1 08:38] IPv4: martian source 10.105.176.244 from 192.168.49.2, on dev eth0
	[  +0.000025] ll header: 00000000: 0a 94 8e 4a 1a 1e 72 e7 eb 50 9b c7 08 00
	[  +2.215674] IPv4: martian source 10.105.176.244 from 192.168.49.2, on dev eth0
	[  +0.000007] ll header: 00000000: 0a 94 8e 4a 1a 1e 72 e7 eb 50 9b c7 08 00
	[  +1.047939] IPv4: martian source 10.244.0.3 from 192.168.49.2, on dev eth0
	[  +0.000023] ll header: 00000000: 0a 94 8e 4a 1a 1e 72 e7 eb 50 9b c7 08 00
	[  +1.023915] IPv4: martian source 10.244.0.3 from 192.168.49.2, on dev eth0
	[  +0.000007] ll header: 00000000: 0a 94 8e 4a 1a 1e 72 e7 eb 50 9b c7 08 00
	[  +1.023851] IPv4: martian source 10.244.0.3 from 192.168.49.2, on dev eth0
	[  +0.000021] ll header: 00000000: 0a 94 8e 4a 1a 1e 72 e7 eb 50 9b c7 08 00
	[  +1.023867] IPv4: martian source 10.244.0.3 from 192.168.49.2, on dev eth0
	[  +0.000026] ll header: 00000000: 0a 94 8e 4a 1a 1e 72 e7 eb 50 9b c7 08 00
	[  +1.023880] IPv4: martian source 10.244.0.3 from 192.168.49.2, on dev eth0
	[  +0.000033] ll header: 00000000: 0a 94 8e 4a 1a 1e 72 e7 eb 50 9b c7 08 00
	[  +1.023884] IPv4: martian source 10.244.0.3 from 192.168.49.2, on dev eth0
	[  +0.000029] ll header: 00000000: 0a 94 8e 4a 1a 1e 72 e7 eb 50 9b c7 08 00
	[  +1.023851] IPv4: martian source 10.244.0.3 from 192.168.49.2, on dev eth0
	[  +0.000028] ll header: 00000000: 0a 94 8e 4a 1a 1e 72 e7 eb 50 9b c7 08 00
	[  +4.031524] IPv4: martian source 10.244.0.3 from 192.168.49.2, on dev eth0
	[  +0.000026] ll header: 00000000: 0a 94 8e 4a 1a 1e 72 e7 eb 50 9b c7 08 00
	[  +8.255123] IPv4: martian source 10.244.0.3 from 192.168.49.2, on dev eth0
	[  +0.000022] ll header: 00000000: 0a 94 8e 4a 1a 1e 72 e7 eb 50 9b c7 08 00
	
	
	==> etcd [7bdecef247777d9b2c200252ec3f7f1536d2d34b567c29c214d7d9eac204c90c] <==
	{"level":"warn","ts":"2025-11-01T09:15:47.074725Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45808","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:15:47.082846Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45832","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:15:47.089925Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45846","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:15:47.097324Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45868","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:15:47.106060Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45874","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:15:47.113445Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45890","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:15:47.124280Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45906","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:15:47.133093Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45916","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:15:47.144933Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45926","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:15:47.162253Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45958","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:15:47.166706Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45968","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:15:47.174167Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45974","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:15:47.186152Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45986","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:15:47.257697Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46010","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-11-01T09:16:02.687693Z","caller":"traceutil/trace.go:172","msg":"trace[2122532009] linearizableReadLoop","detail":"{readStateIndex:391; appliedIndex:391; }","duration":"225.80583ms","start":"2025-11-01T09:16:02.461860Z","end":"2025-11-01T09:16:02.687666Z","steps":["trace[2122532009] 'read index received'  (duration: 225.797115ms)","trace[2122532009] 'applied index is now lower than readState.Index'  (duration: 7.295µs)"],"step_count":2}
	{"level":"warn","ts":"2025-11-01T09:16:02.687833Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"225.940796ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-11-01T09:16:02.687924Z","caller":"traceutil/trace.go:172","msg":"trace[669194114] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:379; }","duration":"226.064392ms","start":"2025-11-01T09:16:02.461848Z","end":"2025-11-01T09:16:02.687913Z","steps":["trace[669194114] 'agreement among raft nodes before linearized reading'  (duration: 225.90722ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-01T09:16:02.688106Z","caller":"traceutil/trace.go:172","msg":"trace[1205053273] transaction","detail":"{read_only:false; response_revision:380; number_of_response:1; }","duration":"253.352379ms","start":"2025-11-01T09:16:02.434741Z","end":"2025-11-01T09:16:02.688094Z","steps":["trace[1205053273] 'process raft request'  (duration: 252.982167ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-01T09:16:02.887498Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"134.119655ms","expected-duration":"100ms","prefix":"","request":"header:<ID:6571765875354749870 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/pods/kube-system/kube-scheduler-pause-349394\" mod_revision:380 > success:<request_put:<key:\"/registry/pods/kube-system/kube-scheduler-pause-349394\" value_size:4706 >> failure:<request_range:<key:\"/registry/pods/kube-system/kube-scheduler-pause-349394\" > >>","response":"size:16"}
	{"level":"info","ts":"2025-11-01T09:16:02.887611Z","caller":"traceutil/trace.go:172","msg":"trace[1223324963] transaction","detail":"{read_only:false; response_revision:381; number_of_response:1; }","duration":"187.186459ms","start":"2025-11-01T09:16:02.700409Z","end":"2025-11-01T09:16:02.887596Z","steps":["trace[1223324963] 'process raft request'  (duration: 52.09081ms)","trace[1223324963] 'compare'  (duration: 133.963105ms)"],"step_count":2}
	{"level":"info","ts":"2025-11-01T09:16:37.918845Z","caller":"traceutil/trace.go:172","msg":"trace[638945511] linearizableReadLoop","detail":"{readStateIndex:416; appliedIndex:416; }","duration":"105.864934ms","start":"2025-11-01T09:16:37.812953Z","end":"2025-11-01T09:16:37.918818Z","steps":["trace[638945511] 'read index received'  (duration: 105.851397ms)","trace[638945511] 'applied index is now lower than readState.Index'  (duration: 12.213µs)"],"step_count":2}
	{"level":"warn","ts":"2025-11-01T09:16:37.918985Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"106.012712ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-11-01T09:16:37.919050Z","caller":"traceutil/trace.go:172","msg":"trace[1774635505] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:397; }","duration":"106.094646ms","start":"2025-11-01T09:16:37.812943Z","end":"2025-11-01T09:16:37.919037Z","steps":["trace[1774635505] 'agreement among raft nodes before linearized reading'  (duration: 105.951662ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-01T09:16:37.919090Z","caller":"traceutil/trace.go:172","msg":"trace[544685844] transaction","detail":"{read_only:false; response_revision:398; number_of_response:1; }","duration":"128.457606ms","start":"2025-11-01T09:16:37.790618Z","end":"2025-11-01T09:16:37.919075Z","steps":["trace[544685844] 'process raft request'  (duration: 128.228911ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-01T09:16:38.721263Z","caller":"traceutil/trace.go:172","msg":"trace[665718669] transaction","detail":"{read_only:false; response_revision:401; number_of_response:1; }","duration":"146.484164ms","start":"2025-11-01T09:16:38.574762Z","end":"2025-11-01T09:16:38.721246Z","steps":["trace[665718669] 'process raft request'  (duration: 146.38324ms)"],"step_count":1}
	
	
	==> kernel <==
	 09:16:52 up 59 min,  0 user,  load average: 2.66, 1.18, 0.87
	Linux pause-349394 6.8.0-1043-gcp #46~22.04.1-Ubuntu SMP Wed Oct 22 19:00:03 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [4a8d5a2e7c83c5b4e8afac776468c5f7daaf2755b7a303482ad7dc0d21622766] <==
	I1101 09:15:56.229542       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1101 09:15:56.229836       1 main.go:139] hostIP = 192.168.94.2
	podIP = 192.168.94.2
	I1101 09:15:56.230032       1 main.go:148] setting mtu 1500 for CNI 
	I1101 09:15:56.230052       1 main.go:178] kindnetd IP family: "ipv4"
	I1101 09:15:56.230076       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-01T09:15:56Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1101 09:15:56.428980       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1101 09:15:56.429008       1 controller.go:381] "Waiting for informer caches to sync"
	I1101 09:15:56.429025       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1101 09:15:56.449849       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1101 09:16:26.430536       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1101 09:16:26.430532       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1101 09:16:26.430532       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1101 09:16:26.451207       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	I1101 09:16:27.629751       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1101 09:16:27.629808       1 metrics.go:72] Registering metrics
	I1101 09:16:27.629914       1 controller.go:711] "Syncing nftables rules"
	I1101 09:16:36.429139       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1101 09:16:36.429220       1 main.go:301] handling current node
	I1101 09:16:46.434955       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1101 09:16:46.435016       1 main.go:301] handling current node
	
	
	==> kube-apiserver [066ac59efc73e772486d4d3af8783931652a5d9a99a996276f490885d836b19b] <==
	I1101 09:15:47.803641       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1101 09:15:47.803661       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1101 09:15:47.804095       1 controller.go:667] quota admission added evaluator for: namespaces
	I1101 09:15:47.804753       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1101 09:15:47.814561       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1101 09:15:47.817261       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1101 09:15:47.839566       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1101 09:15:47.844486       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1101 09:15:48.720601       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1101 09:15:48.728084       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1101 09:15:48.728110       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1101 09:15:49.454322       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1101 09:15:49.510599       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1101 09:15:49.612897       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1101 09:15:49.632265       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.94.2]
	I1101 09:15:49.633706       1 controller.go:667] quota admission added evaluator for: endpoints
	I1101 09:15:49.639795       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1101 09:15:49.756175       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1101 09:15:50.434128       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1101 09:15:50.457490       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1101 09:15:50.468919       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1101 09:15:55.508951       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I1101 09:15:55.792110       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1101 09:15:55.796632       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1101 09:15:55.811340       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [b5819d2b49cc99ce150a3c219c09f4887b9aa916a34d826d6b47a31ec0d67fc2] <==
	I1101 09:15:54.753282       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1101 09:15:54.753293       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1101 09:15:54.753612       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1101 09:15:54.753640       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1101 09:15:54.754441       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1101 09:15:54.754468       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1101 09:15:54.754554       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1101 09:15:54.754598       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1101 09:15:54.754616       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1101 09:15:54.754642       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="pause-349394"
	I1101 09:15:54.754681       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1101 09:15:54.754874       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1101 09:15:54.755748       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1101 09:15:54.755781       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1101 09:15:54.757339       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1101 09:15:54.757357       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1101 09:15:54.757366       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1101 09:15:54.759211       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1101 09:15:54.760679       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1101 09:15:54.761646       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1101 09:15:54.761683       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1101 09:15:54.761696       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1101 09:15:54.764996       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1101 09:15:54.781502       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1101 09:16:39.759490       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [68129655f11eb2fbbff91f1a063a3abde455c30c6d01387be8b1f621cd09a76c] <==
	I1101 09:15:56.041049       1 server_linux.go:53] "Using iptables proxy"
	I1101 09:15:56.130593       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1101 09:15:56.231917       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1101 09:15:56.231953       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.94.2"]
	E1101 09:15:56.232058       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1101 09:15:56.253982       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1101 09:15:56.254041       1 server_linux.go:132] "Using iptables Proxier"
	I1101 09:15:56.261064       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1101 09:15:56.261527       1 server.go:527] "Version info" version="v1.34.1"
	I1101 09:15:56.261558       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1101 09:15:56.263255       1 config.go:200] "Starting service config controller"
	I1101 09:15:56.263271       1 config.go:106] "Starting endpoint slice config controller"
	I1101 09:15:56.263292       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1101 09:15:56.263290       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1101 09:15:56.263321       1 config.go:403] "Starting serviceCIDR config controller"
	I1101 09:15:56.263329       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1101 09:15:56.263378       1 config.go:309] "Starting node config controller"
	I1101 09:15:56.263420       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1101 09:15:56.263431       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1101 09:15:56.363480       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1101 09:15:56.363480       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1101 09:15:56.364093       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [8dfd3a6fbe5192e33887de6a4599d920604aacc1bcb4ae7456344f2139e27f32] <==
	E1101 09:15:47.888610       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1101 09:15:47.888675       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1101 09:15:47.888826       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1101 09:15:47.888905       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1101 09:15:47.888956       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1101 09:15:47.889018       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1101 09:15:47.889063       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1101 09:15:47.889100       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1101 09:15:47.889132       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1101 09:15:47.889180       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1101 09:15:48.729405       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1101 09:15:48.768791       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1101 09:15:48.777820       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1101 09:15:48.804347       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1101 09:15:48.825203       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1101 09:15:48.825203       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1101 09:15:48.911381       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1101 09:15:48.919095       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1101 09:15:48.919301       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1101 09:15:49.003068       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1101 09:15:49.062769       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1101 09:15:49.095646       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1101 09:15:49.178218       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1101 09:15:49.186166       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	I1101 09:15:52.172336       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 01 09:15:54 pause-349394 kubelet[1298]: I1101 09:15:54.768544    1298 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Nov 01 09:15:54 pause-349394 kubelet[1298]: I1101 09:15:54.769318    1298 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Nov 01 09:15:55 pause-349394 kubelet[1298]: I1101 09:15:55.577382    1298 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/35381609-683b-4b5c-b820-05f32d4ae095-xtables-lock\") pod \"kube-proxy-4xbbh\" (UID: \"35381609-683b-4b5c-b820-05f32d4ae095\") " pod="kube-system/kube-proxy-4xbbh"
	Nov 01 09:15:55 pause-349394 kubelet[1298]: I1101 09:15:55.577443    1298 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/35381609-683b-4b5c-b820-05f32d4ae095-kube-proxy\") pod \"kube-proxy-4xbbh\" (UID: \"35381609-683b-4b5c-b820-05f32d4ae095\") " pod="kube-system/kube-proxy-4xbbh"
	Nov 01 09:15:55 pause-349394 kubelet[1298]: I1101 09:15:55.577648    1298 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/35381609-683b-4b5c-b820-05f32d4ae095-lib-modules\") pod \"kube-proxy-4xbbh\" (UID: \"35381609-683b-4b5c-b820-05f32d4ae095\") " pod="kube-system/kube-proxy-4xbbh"
	Nov 01 09:15:55 pause-349394 kubelet[1298]: I1101 09:15:55.577677    1298 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-th4fk\" (UniqueName: \"kubernetes.io/projected/35381609-683b-4b5c-b820-05f32d4ae095-kube-api-access-th4fk\") pod \"kube-proxy-4xbbh\" (UID: \"35381609-683b-4b5c-b820-05f32d4ae095\") " pod="kube-system/kube-proxy-4xbbh"
	Nov 01 09:15:55 pause-349394 kubelet[1298]: I1101 09:15:55.577734    1298 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/4ecaeeef-5e55-4e20-abe4-5294a0b245ee-cni-cfg\") pod \"kindnet-cnnft\" (UID: \"4ecaeeef-5e55-4e20-abe4-5294a0b245ee\") " pod="kube-system/kindnet-cnnft"
	Nov 01 09:15:55 pause-349394 kubelet[1298]: I1101 09:15:55.577808    1298 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4ecaeeef-5e55-4e20-abe4-5294a0b245ee-xtables-lock\") pod \"kindnet-cnnft\" (UID: \"4ecaeeef-5e55-4e20-abe4-5294a0b245ee\") " pod="kube-system/kindnet-cnnft"
	Nov 01 09:15:55 pause-349394 kubelet[1298]: I1101 09:15:55.577830    1298 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4ecaeeef-5e55-4e20-abe4-5294a0b245ee-lib-modules\") pod \"kindnet-cnnft\" (UID: \"4ecaeeef-5e55-4e20-abe4-5294a0b245ee\") " pod="kube-system/kindnet-cnnft"
	Nov 01 09:15:55 pause-349394 kubelet[1298]: I1101 09:15:55.577894    1298 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fhjsh\" (UniqueName: \"kubernetes.io/projected/4ecaeeef-5e55-4e20-abe4-5294a0b245ee-kube-api-access-fhjsh\") pod \"kindnet-cnnft\" (UID: \"4ecaeeef-5e55-4e20-abe4-5294a0b245ee\") " pod="kube-system/kindnet-cnnft"
	Nov 01 09:15:56 pause-349394 kubelet[1298]: I1101 09:15:56.475513    1298 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-4xbbh" podStartSLOduration=1.475492976 podStartE2EDuration="1.475492976s" podCreationTimestamp="2025-11-01 09:15:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 09:15:56.475478886 +0000 UTC m=+6.258195671" watchObservedRunningTime="2025-11-01 09:15:56.475492976 +0000 UTC m=+6.258209760"
	Nov 01 09:15:56 pause-349394 kubelet[1298]: I1101 09:15:56.502657    1298 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-cnnft" podStartSLOduration=1.502623277 podStartE2EDuration="1.502623277s" podCreationTimestamp="2025-11-01 09:15:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 09:15:56.489680272 +0000 UTC m=+6.272397062" watchObservedRunningTime="2025-11-01 09:15:56.502623277 +0000 UTC m=+6.285340062"
	Nov 01 09:16:37 pause-349394 kubelet[1298]: I1101 09:16:37.036223    1298 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Nov 01 09:16:37 pause-349394 kubelet[1298]: I1101 09:16:37.186478    1298 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/921445fa-0791-4702-84d0-51ac75b88ec0-config-volume\") pod \"coredns-66bc5c9577-ffqwr\" (UID: \"921445fa-0791-4702-84d0-51ac75b88ec0\") " pod="kube-system/coredns-66bc5c9577-ffqwr"
	Nov 01 09:16:37 pause-349394 kubelet[1298]: I1101 09:16:37.186547    1298 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8g79j\" (UniqueName: \"kubernetes.io/projected/921445fa-0791-4702-84d0-51ac75b88ec0-kube-api-access-8g79j\") pod \"coredns-66bc5c9577-ffqwr\" (UID: \"921445fa-0791-4702-84d0-51ac75b88ec0\") " pod="kube-system/coredns-66bc5c9577-ffqwr"
	Nov 01 09:16:38 pause-349394 kubelet[1298]: I1101 09:16:38.724516    1298 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-ffqwr" podStartSLOduration=43.72449306 podStartE2EDuration="43.72449306s" podCreationTimestamp="2025-11-01 09:15:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 09:16:38.722696423 +0000 UTC m=+48.505413230" watchObservedRunningTime="2025-11-01 09:16:38.72449306 +0000 UTC m=+48.507209845"
	Nov 01 09:16:42 pause-349394 kubelet[1298]: W1101 09:16:42.414578    1298 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "/var/run/crio/crio.sock", ServerName: "localhost", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	Nov 01 09:16:42 pause-349394 kubelet[1298]: E1101 09:16:42.415730    1298 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\"" filter="state:{}"
	Nov 01 09:16:42 pause-349394 kubelet[1298]: E1101 09:16:42.415838    1298 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Nov 01 09:16:42 pause-349394 kubelet[1298]: E1101 09:16:42.415879    1298 kubelet_pods.go:1266] "Error listing containers" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Nov 01 09:16:42 pause-349394 kubelet[1298]: E1101 09:16:42.415906    1298 kubelet.go:2613] "Failed cleaning pods" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Nov 01 09:16:47 pause-349394 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 01 09:16:47 pause-349394 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 01 09:16:47 pause-349394 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Nov 01 09:16:47 pause-349394 systemd[1]: kubelet.service: Consumed 2.362s CPU time.
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-349394 -n pause-349394
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-349394 -n pause-349394: exit status 2 (396.242248ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context pause-349394 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestPause/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestPause/serial/Pause (6.13s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (2.38s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-152344 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-152344 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (286.078798ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T09:19:38Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-152344 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-152344 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context old-k8s-version-152344 describe deploy/metrics-server -n kube-system: exit status 1 (67.44701ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

                                                
                                                
** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-152344 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect old-k8s-version-152344
helpers_test.go:243: (dbg) docker inspect old-k8s-version-152344:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "89c3ec5c14cb5bc7fcba09480cd2896570f287e2f8a794c80fecfe7d058e83fe",
	        "Created": "2025-11-01T09:18:45.394454049Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 231385,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-01T09:18:45.443618817Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:a1caeebaf98ed0136731e905a1e086f77985a42c2ebb5a7e0b3d0bd7fcbe10cc",
	        "ResolvConfPath": "/var/lib/docker/containers/89c3ec5c14cb5bc7fcba09480cd2896570f287e2f8a794c80fecfe7d058e83fe/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/89c3ec5c14cb5bc7fcba09480cd2896570f287e2f8a794c80fecfe7d058e83fe/hostname",
	        "HostsPath": "/var/lib/docker/containers/89c3ec5c14cb5bc7fcba09480cd2896570f287e2f8a794c80fecfe7d058e83fe/hosts",
	        "LogPath": "/var/lib/docker/containers/89c3ec5c14cb5bc7fcba09480cd2896570f287e2f8a794c80fecfe7d058e83fe/89c3ec5c14cb5bc7fcba09480cd2896570f287e2f8a794c80fecfe7d058e83fe-json.log",
	        "Name": "/old-k8s-version-152344",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-152344:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "old-k8s-version-152344",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "89c3ec5c14cb5bc7fcba09480cd2896570f287e2f8a794c80fecfe7d058e83fe",
	                "LowerDir": "/var/lib/docker/overlay2/2167d10a6be83eefc462824ae671de179964763d19a49dc3e2df049d863ec511-init/diff:/var/lib/docker/overlay2/95b4043403bc6c2b0722fad0bb31817195e1e004282cc78b772e7c8c9f9def2d/diff",
	                "MergedDir": "/var/lib/docker/overlay2/2167d10a6be83eefc462824ae671de179964763d19a49dc3e2df049d863ec511/merged",
	                "UpperDir": "/var/lib/docker/overlay2/2167d10a6be83eefc462824ae671de179964763d19a49dc3e2df049d863ec511/diff",
	                "WorkDir": "/var/lib/docker/overlay2/2167d10a6be83eefc462824ae671de179964763d19a49dc3e2df049d863ec511/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-152344",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-152344/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-152344",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-152344",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-152344",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "3542df44b1cb4696110b5af87591db244e8a5b7472d81a5c1ce543b275d089bb",
	            "SandboxKey": "/var/run/docker/netns/3542df44b1cb",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33053"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33054"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33057"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33055"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33056"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-152344": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "be:bb:e0:e5:03:78",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "afdc78f81dc3d841ed82d50aa51ef8a188690396e16fb0187f6b53f70953a37a",
	                    "EndpointID": "8eae391b0f800906f75819b9ca0a111e2420d3d7ae4964a1b1f880b259c6ed4a",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-152344",
	                        "89c3ec5c14cb"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-152344 -n old-k8s-version-152344
helpers_test.go:252: <<< TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-152344 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-152344 logs -n 25: (1.106349617s)
helpers_test.go:260: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬────────────────────
─┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼────────────────────
─┤
	│ ssh     │ -p cilium-204434 sudo cat /etc/kubernetes/kubelet.conf                                                                                                                                                                                        │ cilium-204434          │ jenkins │ v1.37.0 │ 01 Nov 25 09:18 UTC │                     │
	│ ssh     │ -p cilium-204434 sudo cat /var/lib/kubelet/config.yaml                                                                                                                                                                                        │ cilium-204434          │ jenkins │ v1.37.0 │ 01 Nov 25 09:18 UTC │                     │
	│ ssh     │ -p cilium-204434 sudo systemctl status docker --all --full --no-pager                                                                                                                                                                         │ cilium-204434          │ jenkins │ v1.37.0 │ 01 Nov 25 09:18 UTC │                     │
	│ ssh     │ -p cilium-204434 sudo systemctl cat docker --no-pager                                                                                                                                                                                         │ cilium-204434          │ jenkins │ v1.37.0 │ 01 Nov 25 09:18 UTC │                     │
	│ ssh     │ -p cilium-204434 sudo cat /etc/docker/daemon.json                                                                                                                                                                                             │ cilium-204434          │ jenkins │ v1.37.0 │ 01 Nov 25 09:18 UTC │                     │
	│ ssh     │ -p cilium-204434 sudo docker system info                                                                                                                                                                                                      │ cilium-204434          │ jenkins │ v1.37.0 │ 01 Nov 25 09:18 UTC │                     │
	│ ssh     │ -p cilium-204434 sudo systemctl status cri-docker --all --full --no-pager                                                                                                                                                                     │ cilium-204434          │ jenkins │ v1.37.0 │ 01 Nov 25 09:18 UTC │                     │
	│ ssh     │ -p cilium-204434 sudo systemctl cat cri-docker --no-pager                                                                                                                                                                                     │ cilium-204434          │ jenkins │ v1.37.0 │ 01 Nov 25 09:18 UTC │                     │
	│ ssh     │ -p cilium-204434 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                                                                                                                                │ cilium-204434          │ jenkins │ v1.37.0 │ 01 Nov 25 09:18 UTC │                     │
	│ ssh     │ -p cilium-204434 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                                                                                                                          │ cilium-204434          │ jenkins │ v1.37.0 │ 01 Nov 25 09:18 UTC │                     │
	│ ssh     │ -p cilium-204434 sudo cri-dockerd --version                                                                                                                                                                                                   │ cilium-204434          │ jenkins │ v1.37.0 │ 01 Nov 25 09:18 UTC │                     │
	│ ssh     │ -p cilium-204434 sudo systemctl status containerd --all --full --no-pager                                                                                                                                                                     │ cilium-204434          │ jenkins │ v1.37.0 │ 01 Nov 25 09:18 UTC │                     │
	│ ssh     │ -p cilium-204434 sudo systemctl cat containerd --no-pager                                                                                                                                                                                     │ cilium-204434          │ jenkins │ v1.37.0 │ 01 Nov 25 09:18 UTC │                     │
	│ ssh     │ -p cilium-204434 sudo cat /lib/systemd/system/containerd.service                                                                                                                                                                              │ cilium-204434          │ jenkins │ v1.37.0 │ 01 Nov 25 09:18 UTC │                     │
	│ ssh     │ -p cilium-204434 sudo cat /etc/containerd/config.toml                                                                                                                                                                                         │ cilium-204434          │ jenkins │ v1.37.0 │ 01 Nov 25 09:18 UTC │                     │
	│ ssh     │ -p cilium-204434 sudo containerd config dump                                                                                                                                                                                                  │ cilium-204434          │ jenkins │ v1.37.0 │ 01 Nov 25 09:18 UTC │                     │
	│ ssh     │ -p cilium-204434 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                           │ cilium-204434          │ jenkins │ v1.37.0 │ 01 Nov 25 09:18 UTC │                     │
	│ ssh     │ -p cilium-204434 sudo systemctl cat crio --no-pager                                                                                                                                                                                           │ cilium-204434          │ jenkins │ v1.37.0 │ 01 Nov 25 09:18 UTC │                     │
	│ ssh     │ -p cilium-204434 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                 │ cilium-204434          │ jenkins │ v1.37.0 │ 01 Nov 25 09:18 UTC │                     │
	│ ssh     │ -p cilium-204434 sudo crio config                                                                                                                                                                                                             │ cilium-204434          │ jenkins │ v1.37.0 │ 01 Nov 25 09:18 UTC │                     │
	│ delete  │ -p cilium-204434                                                                                                                                                                                                                              │ cilium-204434          │ jenkins │ v1.37.0 │ 01 Nov 25 09:18 UTC │ 01 Nov 25 09:18 UTC │
	│ start   │ -p old-k8s-version-152344 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-152344 │ jenkins │ v1.37.0 │ 01 Nov 25 09:18 UTC │ 01 Nov 25 09:19 UTC │
	│ delete  │ -p running-upgrade-274843                                                                                                                                                                                                                     │ running-upgrade-274843 │ jenkins │ v1.37.0 │ 01 Nov 25 09:18 UTC │ 01 Nov 25 09:18 UTC │
	│ start   │ -p no-preload-397460 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-397460      │ jenkins │ v1.37.0 │ 01 Nov 25 09:18 UTC │ 01 Nov 25 09:19 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-152344 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-152344 │ jenkins │ v1.37.0 │ 01 Nov 25 09:19 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴────────────────────
─┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/01 09:18:47
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1101 09:18:47.836610  232578 out.go:360] Setting OutFile to fd 1 ...
	I1101 09:18:47.836911  232578 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:18:47.836922  232578 out.go:374] Setting ErrFile to fd 2...
	I1101 09:18:47.836926  232578 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:18:47.837166  232578 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21835-5913/.minikube/bin
	I1101 09:18:47.837683  232578 out.go:368] Setting JSON to false
	I1101 09:18:47.838809  232578 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":3676,"bootTime":1761985052,"procs":300,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1101 09:18:47.838916  232578 start.go:143] virtualization: kvm guest
	I1101 09:18:47.841280  232578 out.go:179] * [no-preload-397460] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1101 09:18:47.842735  232578 notify.go:221] Checking for updates...
	I1101 09:18:47.842799  232578 out.go:179]   - MINIKUBE_LOCATION=21835
	I1101 09:18:47.844817  232578 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1101 09:18:47.846633  232578 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21835-5913/kubeconfig
	I1101 09:18:47.848048  232578 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21835-5913/.minikube
	I1101 09:18:47.849503  232578 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1101 09:18:47.850852  232578 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1101 09:18:47.852681  232578 config.go:182] Loaded profile config "cert-expiration-303094": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:18:47.852787  232578 config.go:182] Loaded profile config "kubernetes-upgrade-846924": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:18:47.852917  232578 config.go:182] Loaded profile config "old-k8s-version-152344": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1101 09:18:47.853007  232578 driver.go:422] Setting default libvirt URI to qemu:///system
	I1101 09:18:47.878838  232578 docker.go:124] docker version: linux-28.5.1:Docker Engine - Community
	I1101 09:18:47.878960  232578 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 09:18:47.943422  232578 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:67 OomKillDisable:false NGoroutines:76 SystemTime:2025-11-01 09:18:47.931414024 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1101 09:18:47.943563  232578 docker.go:319] overlay module found
	I1101 09:18:47.945708  232578 out.go:179] * Using the docker driver based on user configuration
	I1101 09:18:47.947025  232578 start.go:309] selected driver: docker
	I1101 09:18:47.947045  232578 start.go:930] validating driver "docker" against <nil>
	I1101 09:18:47.947075  232578 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1101 09:18:47.947783  232578 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 09:18:48.007511  232578 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:76 SystemTime:2025-11-01 09:18:47.997599608 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1101 09:18:48.007676  232578 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1101 09:18:48.007927  232578 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1101 09:18:48.009792  232578 out.go:179] * Using Docker driver with root privileges
	I1101 09:18:48.011053  232578 cni.go:84] Creating CNI manager for ""
	I1101 09:18:48.011124  232578 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1101 09:18:48.011136  232578 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1101 09:18:48.011199  232578 start.go:353] cluster config:
	{Name:no-preload-397460 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-397460 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID
:0 GPUs: AutoPauseInterval:1m0s}
	I1101 09:18:48.012459  232578 out.go:179] * Starting "no-preload-397460" primary control-plane node in "no-preload-397460" cluster
	I1101 09:18:48.013565  232578 cache.go:124] Beginning downloading kic base image for docker with crio
	I1101 09:18:48.014899  232578 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1101 09:18:48.015958  232578 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1101 09:18:48.015987  232578 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1101 09:18:48.016096  232578 profile.go:143] Saving config to /home/jenkins/minikube-integration/21835-5913/.minikube/profiles/no-preload-397460/config.json ...
	I1101 09:18:48.016139  232578 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21835-5913/.minikube/profiles/no-preload-397460/config.json: {Name:mk1e4887dfa066f92223011e3e1f72a4623080b6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:18:48.016215  232578 cache.go:107] acquiring lock: {Name:mk3da340e5af70247539f8d922cc7bcce42509cd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1101 09:18:48.016277  232578 cache.go:115] /home/jenkins/minikube-integration/21835-5913/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1101 09:18:48.016288  232578 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/21835-5913/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 81.696µs
	I1101 09:18:48.016266  232578 cache.go:107] acquiring lock: {Name:mkd44e5d327380ad6f0bfcd24859998cff83b1da Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1101 09:18:48.016306  232578 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/21835-5913/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1101 09:18:48.016276  232578 cache.go:107] acquiring lock: {Name:mkeb0cf358eb16140604b4a70399a3a029115110 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1101 09:18:48.016322  232578 cache.go:107] acquiring lock: {Name:mk653da29d9cc7e07521281dd09bd564dc663636 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1101 09:18:48.016326  232578 cache.go:107] acquiring lock: {Name:mk62a069595cd51732c873403f79b944c968023c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1101 09:18:48.016355  232578 cache.go:107] acquiring lock: {Name:mk7e6b43fbb8c3177a2ddbf45490c8c23268d610 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1101 09:18:48.016367  232578 cache.go:107] acquiring lock: {Name:mkd3b96e72872fe46da4959b7624b5cd21026b8f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1101 09:18:48.016422  232578 image.go:138] retrieving image: registry.k8s.io/etcd:3.6.4-0
	I1101 09:18:48.016406  232578 cache.go:107] acquiring lock: {Name:mk5f63b0ef2d772b57fd677bfc33b86408c18616 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1101 09:18:48.016494  232578 image.go:138] retrieving image: registry.k8s.io/kube-apiserver:v1.34.1
	I1101 09:18:48.016503  232578 image.go:138] retrieving image: registry.k8s.io/kube-scheduler:v1.34.1
	I1101 09:18:48.016523  232578 image.go:138] retrieving image: registry.k8s.io/kube-proxy:v1.34.1
	I1101 09:18:48.016538  232578 image.go:138] retrieving image: registry.k8s.io/pause:3.10.1
	I1101 09:18:48.016608  232578 image.go:138] retrieving image: registry.k8s.io/coredns/coredns:v1.12.1
	I1101 09:18:48.016433  232578 image.go:138] retrieving image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1101 09:18:48.017742  232578 image.go:181] daemon lookup for registry.k8s.io/kube-proxy:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.34.1
	I1101 09:18:48.017764  232578 image.go:181] daemon lookup for registry.k8s.io/kube-scheduler:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.34.1
	I1101 09:18:48.017792  232578 image.go:181] daemon lookup for registry.k8s.io/pause:3.10.1: Error response from daemon: No such image: registry.k8s.io/pause:3.10.1
	I1101 09:18:48.017760  232578 image.go:181] daemon lookup for registry.k8s.io/kube-apiserver:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.34.1
	I1101 09:18:48.017772  232578 image.go:181] daemon lookup for registry.k8s.io/etcd:3.6.4-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.6.4-0
	I1101 09:18:48.017823  232578 image.go:181] daemon lookup for registry.k8s.io/coredns/coredns:v1.12.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.12.1
	I1101 09:18:48.017752  232578 image.go:181] daemon lookup for registry.k8s.io/kube-controller-manager:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1101 09:18:48.039326  232578 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1101 09:18:48.039345  232578 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1101 09:18:48.039360  232578 cache.go:233] Successfully downloaded all kic artifacts
	I1101 09:18:48.039382  232578 start.go:360] acquireMachinesLock for no-preload-397460: {Name:mk53345d4b51e8783ff01ad93264377536fe034e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1101 09:18:48.039469  232578 start.go:364] duration metric: took 70.256µs to acquireMachinesLock for "no-preload-397460"
	I1101 09:18:48.039493  232578 start.go:93] Provisioning new machine with config: &{Name:no-preload-397460 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-397460 Namespace:default APIServerHAVIP: APIServer
Name:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwa
rePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1101 09:18:48.039562  232578 start.go:125] createHost starting for "" (driver="docker")
	I1101 09:18:45.315492  230484 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21835-5913/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v old-k8s-version-152344:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir: (5.252343146s)
	I1101 09:18:45.315528  230484 kic.go:203] duration metric: took 5.252495856s to extract preloaded images to volume ...
	W1101 09:18:45.315628  230484 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1101 09:18:45.315669  230484 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1101 09:18:45.315721  230484 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1101 09:18:45.376800  230484 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname old-k8s-version-152344 --name old-k8s-version-152344 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=old-k8s-version-152344 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=old-k8s-version-152344 --network old-k8s-version-152344 --ip 192.168.103.2 --volume old-k8s-version-152344:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8
	I1101 09:18:45.704602  230484 cli_runner.go:164] Run: docker container inspect old-k8s-version-152344 --format={{.State.Running}}
	I1101 09:18:45.730800  230484 cli_runner.go:164] Run: docker container inspect old-k8s-version-152344 --format={{.State.Status}}
	I1101 09:18:45.752240  230484 cli_runner.go:164] Run: docker exec old-k8s-version-152344 stat /var/lib/dpkg/alternatives/iptables
	I1101 09:18:45.798221  230484 oci.go:144] the created container "old-k8s-version-152344" has a running status.
	I1101 09:18:45.798250  230484 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21835-5913/.minikube/machines/old-k8s-version-152344/id_rsa...
	I1101 09:18:46.146815  230484 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21835-5913/.minikube/machines/old-k8s-version-152344/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1101 09:18:46.172978  230484 cli_runner.go:164] Run: docker container inspect old-k8s-version-152344 --format={{.State.Status}}
	I1101 09:18:46.192587  230484 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1101 09:18:46.192613  230484 kic_runner.go:114] Args: [docker exec --privileged old-k8s-version-152344 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1101 09:18:46.235087  230484 cli_runner.go:164] Run: docker container inspect old-k8s-version-152344 --format={{.State.Status}}
	I1101 09:18:46.255010  230484 machine.go:94] provisionDockerMachine start ...
	I1101 09:18:46.255100  230484 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-152344
	I1101 09:18:46.274438  230484 main.go:143] libmachine: Using SSH client type: native
	I1101 09:18:46.274693  230484 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33053 <nil> <nil>}
	I1101 09:18:46.274708  230484 main.go:143] libmachine: About to run SSH command:
	hostname
	I1101 09:18:46.419268  230484 main.go:143] libmachine: SSH cmd err, output: <nil>: old-k8s-version-152344
	
	I1101 09:18:46.419300  230484 ubuntu.go:182] provisioning hostname "old-k8s-version-152344"
	I1101 09:18:46.419355  230484 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-152344
	I1101 09:18:46.439681  230484 main.go:143] libmachine: Using SSH client type: native
	I1101 09:18:46.439954  230484 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33053 <nil> <nil>}
	I1101 09:18:46.439971  230484 main.go:143] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-152344 && echo "old-k8s-version-152344" | sudo tee /etc/hostname
	I1101 09:18:46.596920  230484 main.go:143] libmachine: SSH cmd err, output: <nil>: old-k8s-version-152344
	
	I1101 09:18:46.597013  230484 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-152344
	I1101 09:18:46.616399  230484 main.go:143] libmachine: Using SSH client type: native
	I1101 09:18:46.616620  230484 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33053 <nil> <nil>}
	I1101 09:18:46.616641  230484 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-152344' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-152344/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-152344' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1101 09:18:46.760138  230484 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1101 09:18:46.760166  230484 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21835-5913/.minikube CaCertPath:/home/jenkins/minikube-integration/21835-5913/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21835-5913/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21835-5913/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21835-5913/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21835-5913/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21835-5913/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21835-5913/.minikube}
	I1101 09:18:46.760198  230484 ubuntu.go:190] setting up certificates
	I1101 09:18:46.760210  230484 provision.go:84] configureAuth start
	I1101 09:18:46.760258  230484 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-152344
	I1101 09:18:46.781579  230484 provision.go:143] copyHostCerts
	I1101 09:18:46.781637  230484 exec_runner.go:144] found /home/jenkins/minikube-integration/21835-5913/.minikube/key.pem, removing ...
	I1101 09:18:46.781648  230484 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21835-5913/.minikube/key.pem
	I1101 09:18:46.781733  230484 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21835-5913/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21835-5913/.minikube/key.pem (1675 bytes)
	I1101 09:18:46.781907  230484 exec_runner.go:144] found /home/jenkins/minikube-integration/21835-5913/.minikube/ca.pem, removing ...
	I1101 09:18:46.781920  230484 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21835-5913/.minikube/ca.pem
	I1101 09:18:46.781967  230484 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21835-5913/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21835-5913/.minikube/ca.pem (1078 bytes)
	I1101 09:18:46.782058  230484 exec_runner.go:144] found /home/jenkins/minikube-integration/21835-5913/.minikube/cert.pem, removing ...
	I1101 09:18:46.782070  230484 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21835-5913/.minikube/cert.pem
	I1101 09:18:46.782110  230484 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21835-5913/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21835-5913/.minikube/cert.pem (1123 bytes)
	I1101 09:18:46.782194  230484 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21835-5913/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21835-5913/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21835-5913/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-152344 san=[127.0.0.1 192.168.103.2 localhost minikube old-k8s-version-152344]
	I1101 09:18:47.224443  230484 provision.go:177] copyRemoteCerts
	I1101 09:18:47.224505  230484 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1101 09:18:47.224556  230484 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-152344
	I1101 09:18:47.244566  230484 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33053 SSHKeyPath:/home/jenkins/minikube-integration/21835-5913/.minikube/machines/old-k8s-version-152344/id_rsa Username:docker}
	I1101 09:18:47.348639  230484 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-5913/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1101 09:18:47.375201  230484 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-5913/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1101 09:18:47.393969  230484 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-5913/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1101 09:18:47.413252  230484 provision.go:87] duration metric: took 653.019169ms to configureAuth
	I1101 09:18:47.413289  230484 ubuntu.go:206] setting minikube options for container-runtime
	I1101 09:18:47.413453  230484 config.go:182] Loaded profile config "old-k8s-version-152344": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1101 09:18:47.413553  230484 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-152344
	I1101 09:18:47.433958  230484 main.go:143] libmachine: Using SSH client type: native
	I1101 09:18:47.434205  230484 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33053 <nil> <nil>}
	I1101 09:18:47.434231  230484 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1101 09:18:47.718055  230484 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1101 09:18:47.718084  230484 machine.go:97] duration metric: took 1.463048383s to provisionDockerMachine
	I1101 09:18:47.718096  230484 client.go:176] duration metric: took 8.279281831s to LocalClient.Create
	I1101 09:18:47.718120  230484 start.go:167] duration metric: took 8.279342872s to libmachine.API.Create "old-k8s-version-152344"
	I1101 09:18:47.718133  230484 start.go:293] postStartSetup for "old-k8s-version-152344" (driver="docker")
	I1101 09:18:47.718147  230484 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1101 09:18:47.718218  230484 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1101 09:18:47.718273  230484 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-152344
	I1101 09:18:47.739746  230484 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33053 SSHKeyPath:/home/jenkins/minikube-integration/21835-5913/.minikube/machines/old-k8s-version-152344/id_rsa Username:docker}
	I1101 09:18:47.845145  230484 ssh_runner.go:195] Run: cat /etc/os-release
	I1101 09:18:47.849026  230484 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1101 09:18:47.849059  230484 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1101 09:18:47.849070  230484 filesync.go:126] Scanning /home/jenkins/minikube-integration/21835-5913/.minikube/addons for local assets ...
	I1101 09:18:47.849126  230484 filesync.go:126] Scanning /home/jenkins/minikube-integration/21835-5913/.minikube/files for local assets ...
	I1101 09:18:47.849195  230484 filesync.go:149] local asset: /home/jenkins/minikube-integration/21835-5913/.minikube/files/etc/ssl/certs/94142.pem -> 94142.pem in /etc/ssl/certs
	I1101 09:18:47.849277  230484 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1101 09:18:47.857451  230484 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-5913/.minikube/files/etc/ssl/certs/94142.pem --> /etc/ssl/certs/94142.pem (1708 bytes)
	I1101 09:18:47.880339  230484 start.go:296] duration metric: took 162.193853ms for postStartSetup
	I1101 09:18:47.880729  230484 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-152344
	I1101 09:18:47.901274  230484 profile.go:143] Saving config to /home/jenkins/minikube-integration/21835-5913/.minikube/profiles/old-k8s-version-152344/config.json ...
	I1101 09:18:47.901615  230484 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1101 09:18:47.901674  230484 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-152344
	I1101 09:18:47.926193  230484 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33053 SSHKeyPath:/home/jenkins/minikube-integration/21835-5913/.minikube/machines/old-k8s-version-152344/id_rsa Username:docker}
	I1101 09:18:48.027860  230484 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1101 09:18:48.032848  230484 start.go:128] duration metric: took 8.596122412s to createHost
	I1101 09:18:48.032900  230484 start.go:83] releasing machines lock for "old-k8s-version-152344", held for 8.596312681s
	I1101 09:18:48.032995  230484 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-152344
	I1101 09:18:48.054331  230484 ssh_runner.go:195] Run: cat /version.json
	I1101 09:18:48.054389  230484 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-152344
	I1101 09:18:48.054396  230484 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1101 09:18:48.054459  230484 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-152344
	I1101 09:18:48.075406  230484 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33053 SSHKeyPath:/home/jenkins/minikube-integration/21835-5913/.minikube/machines/old-k8s-version-152344/id_rsa Username:docker}
	I1101 09:18:48.075689  230484 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33053 SSHKeyPath:/home/jenkins/minikube-integration/21835-5913/.minikube/machines/old-k8s-version-152344/id_rsa Username:docker}
	I1101 09:18:48.235377  230484 ssh_runner.go:195] Run: systemctl --version
	I1101 09:18:48.243441  230484 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1101 09:18:48.284209  230484 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1101 09:18:48.289789  230484 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1101 09:18:48.289854  230484 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1101 09:18:48.322204  230484 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1101 09:18:48.322224  230484 start.go:496] detecting cgroup driver to use...
	I1101 09:18:48.322253  230484 detect.go:190] detected "systemd" cgroup driver on host os
	I1101 09:18:48.322298  230484 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1101 09:18:48.339299  230484 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1101 09:18:48.353258  230484 docker.go:218] disabling cri-docker service (if available) ...
	I1101 09:18:48.353304  230484 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1101 09:18:48.372944  230484 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1101 09:18:48.397991  230484 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1101 09:18:48.502102  230484 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1101 09:18:48.618580  230484 docker.go:234] disabling docker service ...
	I1101 09:18:48.618647  230484 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1101 09:18:48.642528  230484 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1101 09:18:48.658033  230484 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1101 09:18:48.761109  230484 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1101 09:18:48.856553  230484 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1101 09:18:48.870515  230484 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1101 09:18:48.889754  230484 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1101 09:18:48.889820  230484 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:18:48.904435  230484 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1101 09:18:48.904489  230484 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:18:48.914428  230484 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:18:48.925481  230484 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:18:48.935785  230484 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1101 09:18:48.944726  230484 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:18:48.954708  230484 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:18:48.969514  230484 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:18:48.979512  230484 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1101 09:18:48.988018  230484 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1101 09:18:48.996303  230484 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 09:18:49.098364  230484 ssh_runner.go:195] Run: sudo systemctl restart crio
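The run above rewrites the CRI-O drop-in configuration before restarting the runtime. A consolidated sketch of those exact steps, using only the file, image tag, and commands shown in the log lines above (nothing here is verified beyond what the log records):
	# adjust CRI-O's pause image and cgroup handling, then restart (same drop-in file as in the log)
	sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf
	sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf
	sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf
	sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf
	sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	sudo systemctl daemon-reload && sudo systemctl restart crio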
	I1101 09:18:49.220005  230484 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1101 09:18:49.220085  230484 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1101 09:18:49.224273  230484 start.go:564] Will wait 60s for crictl version
	I1101 09:18:49.224329  230484 ssh_runner.go:195] Run: which crictl
	I1101 09:18:49.228143  230484 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1101 09:18:49.255661  230484 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1101 09:18:49.255742  230484 ssh_runner.go:195] Run: crio --version
	I1101 09:18:49.289757  230484 ssh_runner.go:195] Run: crio --version
	I1101 09:18:49.326651  230484 out.go:179] * Preparing Kubernetes v1.28.0 on CRI-O 1.34.1 ...
	I1101 09:18:45.048183  216020 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1101 09:18:45.048688  216020 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1101 09:18:45.548052  216020 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1101 09:18:45.548461  216020 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1101 09:18:46.047955  216020 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1101 09:18:46.048402  216020 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1101 09:18:46.547983  216020 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1101 09:18:49.328111  230484 cli_runner.go:164] Run: docker network inspect old-k8s-version-152344 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1101 09:18:49.357663  230484 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I1101 09:18:49.364341  230484 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.103.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1101 09:18:49.383833  230484 kubeadm.go:884] updating cluster {Name:old-k8s-version-152344 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-152344 Namespace:default APIServerHAVIP: APIServerName:minik
ubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFir
mwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1101 09:18:49.384143  230484 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1101 09:18:49.384243  230484 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 09:18:49.436264  230484 crio.go:514] all images are preloaded for cri-o runtime.
	I1101 09:18:49.436292  230484 crio.go:433] Images already preloaded, skipping extraction
	I1101 09:18:49.436345  230484 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 09:18:49.483134  230484 crio.go:514] all images are preloaded for cri-o runtime.
	I1101 09:18:49.483161  230484 cache_images.go:86] Images are preloaded, skipping loading
	I1101 09:18:49.483171  230484 kubeadm.go:935] updating node { 192.168.103.2 8443 v1.28.0 crio true true} ...
	I1101 09:18:49.483407  230484 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=old-k8s-version-152344 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-152344 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1101 09:18:49.483535  230484 ssh_runner.go:195] Run: crio config
	I1101 09:18:49.561506  230484 cni.go:84] Creating CNI manager for ""
	I1101 09:18:49.561534  230484 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1101 09:18:49.561554  230484 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1101 09:18:49.561584  230484 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8443 KubernetesVersion:v1.28.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-152344 NodeName:old-k8s-version-152344 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt Static
PodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1101 09:18:49.561762  230484 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "old-k8s-version-152344"
	  kubeletExtraArgs:
	    node-ip: 192.168.103.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1101 09:18:49.561835  230484 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.0
	I1101 09:18:49.579920  230484 binaries.go:44] Found k8s binaries, skipping transfer
	I1101 09:18:49.579996  230484 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1101 09:18:49.589256  230484 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (373 bytes)
	I1101 09:18:49.607124  230484 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1101 09:18:49.629580  230484 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2162 bytes)
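The KubeletConfiguration generated above sets cgroupDriver: systemd, which matches the cgroup_manager = "systemd" written into /etc/crio/crio.conf.d/02-crio.conf earlier in this run; a mismatch between the two is a common cause of pods failing to start. A quick, hedged check using only paths that appear in this log:
	grep cgroup_manager /etc/crio/crio.conf.d/02-crio.conf   # expect: cgroup_manager = "systemd"
	grep cgroupDriver /var/lib/kubelet/config.yaml            # expect: cgroupDriver: systemd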
	I1101 09:18:49.649267  230484 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I1101 09:18:49.654670  230484 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
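Both host.minikube.internal and control-plane.minikube.internal are added with the same idempotent pattern: check for a tab-separated entry, and if it is missing, rewrite /etc/hosts without any stale entry and append the new one. A generic sketch of that pattern; NAME and IP are placeholders copied from the log lines above:
	NAME=control-plane.minikube.internal   # placeholder, from the log
	IP=192.168.103.2                       # placeholder, from the log
	if ! grep -q "${IP}"$'\t'"${NAME}"'$' /etc/hosts; then
	  { grep -v $'\t'"${NAME}"'$' /etc/hosts; printf '%s\t%s\n' "$IP" "$NAME"; } > /tmp/h.$$
	  sudo cp /tmp/h.$$ /etc/hosts
	fi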
	I1101 09:18:49.668670  230484 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 09:18:49.774098  230484 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1101 09:18:49.797143  230484 certs.go:69] Setting up /home/jenkins/minikube-integration/21835-5913/.minikube/profiles/old-k8s-version-152344 for IP: 192.168.103.2
	I1101 09:18:49.797166  230484 certs.go:195] generating shared ca certs ...
	I1101 09:18:49.797185  230484 certs.go:227] acquiring lock for ca certs: {Name:mkfdee6a84670347521013ebeef165551380cb9c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:18:49.797343  230484 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21835-5913/.minikube/ca.key
	I1101 09:18:49.797414  230484 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21835-5913/.minikube/proxy-client-ca.key
	I1101 09:18:49.797432  230484 certs.go:257] generating profile certs ...
	I1101 09:18:49.797494  230484 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21835-5913/.minikube/profiles/old-k8s-version-152344/client.key
	I1101 09:18:49.797552  230484 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21835-5913/.minikube/profiles/old-k8s-version-152344/client.crt with IP's: []
	I1101 09:18:49.964718  230484 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21835-5913/.minikube/profiles/old-k8s-version-152344/client.crt ...
	I1101 09:18:49.964746  230484 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21835-5913/.minikube/profiles/old-k8s-version-152344/client.crt: {Name:mk9844c9e8a9c1337f05b78781bbd8fa23f990e4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:18:49.964944  230484 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21835-5913/.minikube/profiles/old-k8s-version-152344/client.key ...
	I1101 09:18:49.964966  230484 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21835-5913/.minikube/profiles/old-k8s-version-152344/client.key: {Name:mk6c0d64c68a51c8cc8b7abbdc166e0764ca9770 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:18:49.965060  230484 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21835-5913/.minikube/profiles/old-k8s-version-152344/apiserver.key.6347e79d
	I1101 09:18:49.965075  230484 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21835-5913/.minikube/profiles/old-k8s-version-152344/apiserver.crt.6347e79d with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.103.2]
	I1101 09:18:50.218439  230484 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21835-5913/.minikube/profiles/old-k8s-version-152344/apiserver.crt.6347e79d ...
	I1101 09:18:50.218467  230484 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21835-5913/.minikube/profiles/old-k8s-version-152344/apiserver.crt.6347e79d: {Name:mk7be222f876d2ff57aee8b9935ddf1c34d80827 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:18:50.218665  230484 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21835-5913/.minikube/profiles/old-k8s-version-152344/apiserver.key.6347e79d ...
	I1101 09:18:50.218683  230484 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21835-5913/.minikube/profiles/old-k8s-version-152344/apiserver.key.6347e79d: {Name:mk80ded9dec4045e4d92ae8a45808144766e7506 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:18:50.218798  230484 certs.go:382] copying /home/jenkins/minikube-integration/21835-5913/.minikube/profiles/old-k8s-version-152344/apiserver.crt.6347e79d -> /home/jenkins/minikube-integration/21835-5913/.minikube/profiles/old-k8s-version-152344/apiserver.crt
	I1101 09:18:50.218913  230484 certs.go:386] copying /home/jenkins/minikube-integration/21835-5913/.minikube/profiles/old-k8s-version-152344/apiserver.key.6347e79d -> /home/jenkins/minikube-integration/21835-5913/.minikube/profiles/old-k8s-version-152344/apiserver.key
	I1101 09:18:50.218995  230484 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21835-5913/.minikube/profiles/old-k8s-version-152344/proxy-client.key
	I1101 09:18:50.219012  230484 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21835-5913/.minikube/profiles/old-k8s-version-152344/proxy-client.crt with IP's: []
	I1101 09:18:50.515524  230484 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21835-5913/.minikube/profiles/old-k8s-version-152344/proxy-client.crt ...
	I1101 09:18:50.515554  230484 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21835-5913/.minikube/profiles/old-k8s-version-152344/proxy-client.crt: {Name:mk28f3eb21be38596cb463fcb5ca96a077c0df5f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:18:50.515786  230484 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21835-5913/.minikube/profiles/old-k8s-version-152344/proxy-client.key ...
	I1101 09:18:50.515817  230484 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21835-5913/.minikube/profiles/old-k8s-version-152344/proxy-client.key: {Name:mka61fe7ead43db49bb57f67f336007db38842bb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:18:50.516073  230484 certs.go:484] found cert: /home/jenkins/minikube-integration/21835-5913/.minikube/certs/9414.pem (1338 bytes)
	W1101 09:18:50.516114  230484 certs.go:480] ignoring /home/jenkins/minikube-integration/21835-5913/.minikube/certs/9414_empty.pem, impossibly tiny 0 bytes
	I1101 09:18:50.516124  230484 certs.go:484] found cert: /home/jenkins/minikube-integration/21835-5913/.minikube/certs/ca-key.pem (1675 bytes)
	I1101 09:18:50.516146  230484 certs.go:484] found cert: /home/jenkins/minikube-integration/21835-5913/.minikube/certs/ca.pem (1078 bytes)
	I1101 09:18:50.516169  230484 certs.go:484] found cert: /home/jenkins/minikube-integration/21835-5913/.minikube/certs/cert.pem (1123 bytes)
	I1101 09:18:50.516192  230484 certs.go:484] found cert: /home/jenkins/minikube-integration/21835-5913/.minikube/certs/key.pem (1675 bytes)
	I1101 09:18:50.516232  230484 certs.go:484] found cert: /home/jenkins/minikube-integration/21835-5913/.minikube/files/etc/ssl/certs/94142.pem (1708 bytes)
	I1101 09:18:50.516780  230484 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-5913/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1101 09:18:50.536195  230484 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-5913/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1101 09:18:50.554652  230484 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-5913/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1101 09:18:50.573837  230484 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-5913/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1101 09:18:50.592320  230484 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-5913/.minikube/profiles/old-k8s-version-152344/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1101 09:18:50.610813  230484 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-5913/.minikube/profiles/old-k8s-version-152344/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1101 09:18:50.630193  230484 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-5913/.minikube/profiles/old-k8s-version-152344/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1101 09:18:50.648813  230484 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-5913/.minikube/profiles/old-k8s-version-152344/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1101 09:18:50.668338  230484 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-5913/.minikube/files/etc/ssl/certs/94142.pem --> /usr/share/ca-certificates/94142.pem (1708 bytes)
	I1101 09:18:50.692333  230484 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-5913/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1101 09:18:50.712523  230484 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-5913/.minikube/certs/9414.pem --> /usr/share/ca-certificates/9414.pem (1338 bytes)
	I1101 09:18:50.731522  230484 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1101 09:18:50.744576  230484 ssh_runner.go:195] Run: openssl version
	I1101 09:18:50.750800  230484 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9414.pem && ln -fs /usr/share/ca-certificates/9414.pem /etc/ssl/certs/9414.pem"
	I1101 09:18:50.759985  230484 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9414.pem
	I1101 09:18:50.763962  230484 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  1 08:35 /usr/share/ca-certificates/9414.pem
	I1101 09:18:50.764023  230484 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9414.pem
	I1101 09:18:50.798794  230484 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/9414.pem /etc/ssl/certs/51391683.0"
	I1101 09:18:50.807966  230484 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/94142.pem && ln -fs /usr/share/ca-certificates/94142.pem /etc/ssl/certs/94142.pem"
	I1101 09:18:50.816822  230484 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/94142.pem
	I1101 09:18:50.820804  230484 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  1 08:35 /usr/share/ca-certificates/94142.pem
	I1101 09:18:50.820859  230484 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/94142.pem
	I1101 09:18:50.856326  230484 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/94142.pem /etc/ssl/certs/3ec20f2e.0"
	I1101 09:18:50.865178  230484 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1101 09:18:50.874465  230484 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1101 09:18:50.878622  230484 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  1 08:29 /usr/share/ca-certificates/minikubeCA.pem
	I1101 09:18:50.878682  230484 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1101 09:18:50.918155  230484 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
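The sequence above installs each CA into the node's trust store: the PEM is copied under /usr/share/ca-certificates, its OpenSSL subject hash is computed, and /etc/ssl/certs/<hash>.0 is symlinked to it (51391683.0, 3ec20f2e.0 and b5213941.0 here). A minimal Go sketch of the same idea, run locally with os/exec instead of minikube's ssh_runner; the installCA helper and paths are illustrative, not minikube code:

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// installCA copies certPath into the shared cert dir and creates the
// /etc/ssl/certs/<subject-hash>.0 symlink that OpenSSL-based tools look up.
// It mirrors the scp + "openssl x509 -hash" + "ln -fs" sequence in the log.
func installCA(certPath string) error {
	dest := filepath.Join("/usr/share/ca-certificates", filepath.Base(certPath))
	data, err := os.ReadFile(certPath)
	if err != nil {
		return err
	}
	if err := os.WriteFile(dest, data, 0o644); err != nil {
		return err
	}

	// "openssl x509 -hash -noout" prints the 8-hex-digit subject hash,
	// e.g. "b5213941" for the minikube CA above.
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", dest).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))

	link := filepath.Join("/etc/ssl/certs", hash+".0")
	_ = os.Remove(link) // equivalent of ln -fs: replace any existing link
	return os.Symlink(dest, link)
}

func main() {
	if err := installCA("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
```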
	I1101 09:18:50.928301  230484 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1101 09:18:50.932455  230484 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1101 09:18:50.932552  230484 kubeadm.go:401] StartCluster: {Name:old-k8s-version-152344 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-152344 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 09:18:50.932627  230484 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1101 09:18:50.932694  230484 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1101 09:18:50.961188  230484 cri.go:89] found id: ""
	I1101 09:18:50.961250  230484 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1101 09:18:50.970414  230484 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1101 09:18:50.979401  230484 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1101 09:18:50.979470  230484 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1101 09:18:50.989231  230484 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1101 09:18:50.989252  230484 kubeadm.go:158] found existing configuration files:
	
	I1101 09:18:50.989301  230484 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1101 09:18:50.997284  230484 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1101 09:18:50.997341  230484 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1101 09:18:51.005115  230484 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1101 09:18:51.013781  230484 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1101 09:18:51.013846  230484 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1101 09:18:51.021347  230484 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1101 09:18:51.029092  230484 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1101 09:18:51.029152  230484 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1101 09:18:51.036846  230484 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1101 09:18:51.044430  230484 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1101 09:18:51.044475  230484 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
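Each grep above exits with status 2 because the kubeconfig files do not exist yet, so the follow-up rm calls are no-ops; on a restart the same check would drop any kubeconfig that no longer points at https://control-plane.minikube.internal:8443. A hedged sketch of that grep-or-remove pattern (cleanStaleConfig is a made-up helper, and it runs locally rather than over SSH):

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
)

// cleanStaleConfig removes conf if it does not reference endpoint,
// mirroring the "grep <endpoint> <conf>" followed by "rm -f <conf>"
// sequence in the log.
func cleanStaleConfig(conf, endpoint string) error {
	// grep exits non-zero both when the pattern is absent and when the
	// file is missing; in either case the file is safe to (re)move.
	if err := exec.Command("grep", "-q", endpoint, conf).Run(); err == nil {
		return nil // endpoint present, keep the file
	}
	if err := os.Remove(conf); err != nil && !os.IsNotExist(err) {
		return err
	}
	return nil
}

func main() {
	endpoint := "https://control-plane.minikube.internal:8443"
	for _, conf := range []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	} {
		if err := cleanStaleConfig(conf, endpoint); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}
}
```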
	I1101 09:18:51.051937  230484 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.28.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1101 09:18:51.097614  230484 kubeadm.go:319] [init] Using Kubernetes version: v1.28.0
	I1101 09:18:51.097683  230484 kubeadm.go:319] [preflight] Running pre-flight checks
	I1101 09:18:51.135871  230484 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1101 09:18:51.135964  230484 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1043-gcp
	I1101 09:18:51.136034  230484 kubeadm.go:319] OS: Linux
	I1101 09:18:51.136125  230484 kubeadm.go:319] CGROUPS_CPU: enabled
	I1101 09:18:51.136171  230484 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1101 09:18:51.136254  230484 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1101 09:18:51.136334  230484 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1101 09:18:51.136413  230484 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1101 09:18:51.136489  230484 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1101 09:18:51.136558  230484 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1101 09:18:51.136628  230484 kubeadm.go:319] CGROUPS_IO: enabled
	I1101 09:18:51.213092  230484 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1101 09:18:51.213228  230484 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1101 09:18:51.213378  230484 kubeadm.go:319] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1101 09:18:51.368075  230484 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1101 09:18:48.042027  232578 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1101 09:18:48.042222  232578 start.go:159] libmachine.API.Create for "no-preload-397460" (driver="docker")
	I1101 09:18:48.042249  232578 client.go:173] LocalClient.Create starting
	I1101 09:18:48.042295  232578 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21835-5913/.minikube/certs/ca.pem
	I1101 09:18:48.042321  232578 main.go:143] libmachine: Decoding PEM data...
	I1101 09:18:48.042338  232578 main.go:143] libmachine: Parsing certificate...
	I1101 09:18:48.042393  232578 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21835-5913/.minikube/certs/cert.pem
	I1101 09:18:48.042411  232578 main.go:143] libmachine: Decoding PEM data...
	I1101 09:18:48.042421  232578 main.go:143] libmachine: Parsing certificate...
	I1101 09:18:48.042763  232578 cli_runner.go:164] Run: docker network inspect no-preload-397460 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1101 09:18:48.062458  232578 cli_runner.go:211] docker network inspect no-preload-397460 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1101 09:18:48.062535  232578 network_create.go:284] running [docker network inspect no-preload-397460] to gather additional debugging logs...
	I1101 09:18:48.062559  232578 cli_runner.go:164] Run: docker network inspect no-preload-397460
	W1101 09:18:48.082459  232578 cli_runner.go:211] docker network inspect no-preload-397460 returned with exit code 1
	I1101 09:18:48.082494  232578 network_create.go:287] error running [docker network inspect no-preload-397460]: docker network inspect no-preload-397460: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network no-preload-397460 not found
	I1101 09:18:48.082510  232578 network_create.go:289] output of [docker network inspect no-preload-397460]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network no-preload-397460 not found
	
	** /stderr **
	I1101 09:18:48.082636  232578 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1101 09:18:48.101786  232578 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-5f44df6b5a5b IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:aa:38:92:20:b3:ae} reservation:<nil>}
	I1101 09:18:48.102316  232578 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-ec772021a1d5 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:e6:14:7e:99:b1:e5} reservation:<nil>}
	I1101 09:18:48.102799  232578 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-6ef14c0d2e1a IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:ca:5b:36:d5:85:2b} reservation:<nil>}
	I1101 09:18:48.103373  232578 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-7590eb4da29e IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:9a:f4:08:b0:bc:cf} reservation:<nil>}
	I1101 09:18:48.103644  232578 network.go:211] skipping subnet 192.168.85.0/24 that is taken: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName:br-c9feba7a919c IfaceIPv4:192.168.85.1 IfaceMTU:1500 IfaceMAC:a6:96:07:ef:ec:1e} reservation:<nil>}
	I1101 09:18:48.104307  232578 network.go:206] using free private subnet 192.168.94.0/24: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0003e8f70}
	I1101 09:18:48.104337  232578 network_create.go:124] attempt to create docker network no-preload-397460 192.168.94.0/24 with gateway 192.168.94.1 and MTU of 1500 ...
	I1101 09:18:48.104395  232578 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.94.0/24 --gateway=192.168.94.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=no-preload-397460 no-preload-397460
	I1101 09:18:48.167299  232578 network_create.go:108] docker network no-preload-397460 192.168.94.0/24 created
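network_create.go picks the cluster subnet by walking candidate private /24 ranges (192.168.49.0/24, 192.168.58.0/24, ... with the third octet stepping by 9) and skipping any whose gateway is already bound to an existing bridge, which is how 192.168.94.0/24 was chosen above. A rough sketch of that scan, assuming a simple "gateway already on a local interface" test in place of minikube's reservation bookkeeping:

```go
package main

import (
	"fmt"
	"net"
)

// gatewayInUse reports whether ip is already configured on a local
// interface (e.g. an existing docker bridge such as br-5f44df6b5a5b).
func gatewayInUse(ip net.IP) bool {
	addrs, err := net.InterfaceAddrs()
	if err != nil {
		return false
	}
	for _, a := range addrs {
		if ipn, ok := a.(*net.IPNet); ok && ipn.IP.Equal(ip) {
			return true
		}
	}
	return false
}

// freeSubnet scans 192.168.x.0/24 candidates the way the log does,
// stepping the third octet by 9 and skipping subnets whose .1 gateway
// is already taken.
func freeSubnet() (string, error) {
	for octet := 49; octet <= 254; octet += 9 {
		gw := net.IPv4(192, 168, byte(octet), 1)
		if gatewayInUse(gw) {
			fmt.Printf("skipping subnet 192.168.%d.0/24: gateway %s is taken\n", octet, gw)
			continue
		}
		return fmt.Sprintf("192.168.%d.0/24", octet), nil
	}
	return "", fmt.Errorf("no free private /24 found")
}

func main() {
	subnet, err := freeSubnet()
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("using free private subnet", subnet)
}
```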
	I1101 09:18:48.167328  232578 kic.go:121] calculated static IP "192.168.94.2" for the "no-preload-397460" container
	I1101 09:18:48.167385  232578 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1101 09:18:48.183369  232578 cache.go:162] opening:  /home/jenkins/minikube-integration/21835-5913/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0
	I1101 09:18:48.183754  232578 cache.go:162] opening:  /home/jenkins/minikube-integration/21835-5913/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1
	I1101 09:18:48.186448  232578 cache.go:162] opening:  /home/jenkins/minikube-integration/21835-5913/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1
	I1101 09:18:48.187325  232578 cli_runner.go:164] Run: docker volume create no-preload-397460 --label name.minikube.sigs.k8s.io=no-preload-397460 --label created_by.minikube.sigs.k8s.io=true
	I1101 09:18:48.187573  232578 cache.go:162] opening:  /home/jenkins/minikube-integration/21835-5913/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1
	I1101 09:18:48.197031  232578 cache.go:162] opening:  /home/jenkins/minikube-integration/21835-5913/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1
	I1101 09:18:48.201167  232578 cache.go:162] opening:  /home/jenkins/minikube-integration/21835-5913/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1
	I1101 09:18:48.207302  232578 oci.go:103] Successfully created a docker volume no-preload-397460
	I1101 09:18:48.207374  232578 cli_runner.go:164] Run: docker run --rm --name no-preload-397460-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-397460 --entrypoint /usr/bin/test -v no-preload-397460:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -d /var/lib
	I1101 09:18:48.262896  232578 cache.go:157] /home/jenkins/minikube-integration/21835-5913/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 exists
	I1101 09:18:48.262925  232578 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/21835-5913/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1" took 246.582353ms
	I1101 09:18:48.262941  232578 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/21835-5913/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 succeeded
	I1101 09:18:48.281497  232578 cache.go:162] opening:  /home/jenkins/minikube-integration/21835-5913/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1
	I1101 09:18:48.604416  232578 cache.go:157] /home/jenkins/minikube-integration/21835-5913/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1 exists
	I1101 09:18:48.604450  232578 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.34.1" -> "/home/jenkins/minikube-integration/21835-5913/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1" took 588.218782ms
	I1101 09:18:48.604465  232578 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.34.1 -> /home/jenkins/minikube-integration/21835-5913/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1 succeeded
	I1101 09:18:48.686024  232578 oci.go:107] Successfully prepared a docker volume no-preload-397460
	I1101 09:18:48.686056  232578 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	W1101 09:18:48.686162  232578 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1101 09:18:48.686211  232578 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1101 09:18:48.686264  232578 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1101 09:18:48.755077  232578 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname no-preload-397460 --name no-preload-397460 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-397460 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=no-preload-397460 --network no-preload-397460 --ip 192.168.94.2 --volume no-preload-397460:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8
	I1101 09:18:49.047963  232578 cli_runner.go:164] Run: docker container inspect no-preload-397460 --format={{.State.Running}}
	I1101 09:18:49.071032  232578 cli_runner.go:164] Run: docker container inspect no-preload-397460 --format={{.State.Status}}
	I1101 09:18:49.093709  232578 cli_runner.go:164] Run: docker exec no-preload-397460 stat /var/lib/dpkg/alternatives/iptables
	I1101 09:18:49.143155  232578 oci.go:144] the created container "no-preload-397460" has a running status.
	I1101 09:18:49.143192  232578 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21835-5913/.minikube/machines/no-preload-397460/id_rsa...
	I1101 09:18:49.315124  232578 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21835-5913/.minikube/machines/no-preload-397460/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1101 09:18:49.355579  232578 cli_runner.go:164] Run: docker container inspect no-preload-397460 --format={{.State.Status}}
	I1101 09:18:49.386112  232578 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1101 09:18:49.386137  232578 kic_runner.go:114] Args: [docker exec --privileged no-preload-397460 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1101 09:18:49.451934  232578 cli_runner.go:164] Run: docker container inspect no-preload-397460 --format={{.State.Status}}
	I1101 09:18:49.512557  232578 machine.go:94] provisionDockerMachine start ...
	I1101 09:18:49.512664  232578 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-397460
	I1101 09:18:49.539724  232578 main.go:143] libmachine: Using SSH client type: native
	I1101 09:18:49.540588  232578 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33058 <nil> <nil>}
	I1101 09:18:49.540613  232578 main.go:143] libmachine: About to run SSH command:
	hostname
	I1101 09:18:49.559455  232578 cache.go:157] /home/jenkins/minikube-integration/21835-5913/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1 exists
	I1101 09:18:49.559492  232578 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.12.1" -> "/home/jenkins/minikube-integration/21835-5913/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1" took 1.543173529s
	I1101 09:18:49.559508  232578 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.12.1 -> /home/jenkins/minikube-integration/21835-5913/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1 succeeded
	I1101 09:18:49.675956  232578 cache.go:157] /home/jenkins/minikube-integration/21835-5913/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1 exists
	I1101 09:18:49.675989  232578 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.34.1" -> "/home/jenkins/minikube-integration/21835-5913/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1" took 1.659663639s
	I1101 09:18:49.676013  232578 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.34.1 -> /home/jenkins/minikube-integration/21835-5913/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1 succeeded
	I1101 09:18:49.680733  232578 cache.go:157] /home/jenkins/minikube-integration/21835-5913/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1 exists
	I1101 09:18:49.680765  232578 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.34.1" -> "/home/jenkins/minikube-integration/21835-5913/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1" took 1.664410846s
	I1101 09:18:49.680780  232578 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.34.1 -> /home/jenkins/minikube-integration/21835-5913/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1 succeeded
	I1101 09:18:49.701263  232578 cache.go:157] /home/jenkins/minikube-integration/21835-5913/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1 exists
	I1101 09:18:49.701293  232578 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.34.1" -> "/home/jenkins/minikube-integration/21835-5913/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1" took 1.684972714s
	I1101 09:18:49.701309  232578 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.34.1 -> /home/jenkins/minikube-integration/21835-5913/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1 succeeded
	I1101 09:18:49.715559  232578 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-397460
	
	I1101 09:18:49.715589  232578 ubuntu.go:182] provisioning hostname "no-preload-397460"
	I1101 09:18:49.715659  232578 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-397460
	I1101 09:18:49.744732  232578 main.go:143] libmachine: Using SSH client type: native
	I1101 09:18:49.745115  232578 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33058 <nil> <nil>}
	I1101 09:18:49.745136  232578 main.go:143] libmachine: About to run SSH command:
	sudo hostname no-preload-397460 && echo "no-preload-397460" | sudo tee /etc/hostname
	I1101 09:18:49.987808  232578 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-397460
	
	I1101 09:18:49.987945  232578 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-397460
	I1101 09:18:50.011548  232578 main.go:143] libmachine: Using SSH client type: native
	I1101 09:18:50.011921  232578 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33058 <nil> <nil>}
	I1101 09:18:50.011952  232578 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-397460' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-397460/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-397460' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1101 09:18:50.021495  232578 cache.go:157] /home/jenkins/minikube-integration/21835-5913/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0 exists
	I1101 09:18:50.021525  232578 cache.go:96] cache image "registry.k8s.io/etcd:3.6.4-0" -> "/home/jenkins/minikube-integration/21835-5913/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0" took 2.005281537s
	I1101 09:18:50.021539  232578 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.4-0 -> /home/jenkins/minikube-integration/21835-5913/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0 succeeded
	I1101 09:18:50.021560  232578 cache.go:87] Successfully saved all images to host disk.
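cache.go treats the per-image tarball under .minikube/cache/images as the source of truth: if the tar already exists it just records how long the check took, otherwise it pulls the image and saves it, then logs "save to tar file ... succeeded". A minimal sketch of that check-then-save flow; cacheDir, tarPathFor and pullAndSave are placeholders, not the real minikube helpers:

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
	"time"
)

// cacheDir stands in for .minikube/cache/images/amd64 in the log.
const cacheDir = "/tmp/minikube-cache/images/amd64"

// tarPathFor maps an image ref like "registry.k8s.io/pause:3.10.1" to its
// cached tarball path, e.g. ".../registry.k8s.io/pause_3.10.1".
func tarPathFor(image string) string {
	return filepath.Join(cacheDir, strings.ReplaceAll(image, ":", "_"))
}

// pullAndSave is a placeholder for the real pull+export; here it only
// creates an empty file so the flow can be exercised end to end.
func pullAndSave(image, dest string) error {
	if err := os.MkdirAll(filepath.Dir(dest), 0o755); err != nil {
		return err
	}
	return os.WriteFile(dest, nil, 0o644)
}

// ensureCached mirrors the "exists -> took Xms" / "save to tar file ...
// succeeded" sequence in the log: reuse the tarball when present,
// otherwise save it.
func ensureCached(image string) error {
	start := time.Now()
	dest := tarPathFor(image)
	if _, err := os.Stat(dest); err == nil {
		fmt.Printf("cache image %q -> %q took %s (already exists)\n", image, dest, time.Since(start))
		return nil
	}
	if err := pullAndSave(image, dest); err != nil {
		return err
	}
	fmt.Printf("save to tar file %s -> %s succeeded\n", image, dest)
	return nil
}

func main() {
	for _, img := range []string{"registry.k8s.io/pause:3.10.1", "registry.k8s.io/etcd:3.6.4-0"} {
		if err := ensureCached(img); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}
}
```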
	I1101 09:18:50.153702  232578 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1101 09:18:50.153732  232578 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21835-5913/.minikube CaCertPath:/home/jenkins/minikube-integration/21835-5913/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21835-5913/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21835-5913/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21835-5913/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21835-5913/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21835-5913/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21835-5913/.minikube}
	I1101 09:18:50.153756  232578 ubuntu.go:190] setting up certificates
	I1101 09:18:50.153768  232578 provision.go:84] configureAuth start
	I1101 09:18:50.153826  232578 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-397460
	I1101 09:18:50.173973  232578 provision.go:143] copyHostCerts
	I1101 09:18:50.174041  232578 exec_runner.go:144] found /home/jenkins/minikube-integration/21835-5913/.minikube/ca.pem, removing ...
	I1101 09:18:50.174056  232578 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21835-5913/.minikube/ca.pem
	I1101 09:18:50.174140  232578 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21835-5913/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21835-5913/.minikube/ca.pem (1078 bytes)
	I1101 09:18:50.174254  232578 exec_runner.go:144] found /home/jenkins/minikube-integration/21835-5913/.minikube/cert.pem, removing ...
	I1101 09:18:50.174267  232578 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21835-5913/.minikube/cert.pem
	I1101 09:18:50.174312  232578 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21835-5913/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21835-5913/.minikube/cert.pem (1123 bytes)
	I1101 09:18:50.174388  232578 exec_runner.go:144] found /home/jenkins/minikube-integration/21835-5913/.minikube/key.pem, removing ...
	I1101 09:18:50.174403  232578 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21835-5913/.minikube/key.pem
	I1101 09:18:50.174440  232578 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21835-5913/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21835-5913/.minikube/key.pem (1675 bytes)
	I1101 09:18:50.174510  232578 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21835-5913/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21835-5913/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21835-5913/.minikube/certs/ca-key.pem org=jenkins.no-preload-397460 san=[127.0.0.1 192.168.94.2 localhost minikube no-preload-397460]
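provision.go generates a per-machine server certificate signed by the minikube CA, with the SANs listed above (127.0.0.1, 192.168.94.2, localhost, minikube, no-preload-397460). A compact approximation using crypto/x509; it self-signs for brevity, whereas the real code signs with ca.pem/ca-key.pem:

```go
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"fmt"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// SANs taken from the provision.go:117 line in the log above.
	dnsNames := []string{"localhost", "minikube", "no-preload-397460"}
	ipSANs := []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.94.2")}

	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}

	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.no-preload-397460"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     dnsNames,
		IPAddresses:  ipSANs,
	}

	// Self-signed here for brevity; minikube signs with the CA key pair.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	if err := pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der}); err != nil {
		panic(err)
	}
	fmt.Fprintln(os.Stderr, "wrote server certificate with SANs:", dnsNames, ipSANs)
}
```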
	I1101 09:18:50.502350  232578 provision.go:177] copyRemoteCerts
	I1101 09:18:50.502410  232578 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1101 09:18:50.502444  232578 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-397460
	I1101 09:18:50.521506  232578 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33058 SSHKeyPath:/home/jenkins/minikube-integration/21835-5913/.minikube/machines/no-preload-397460/id_rsa Username:docker}
	I1101 09:18:50.623800  232578 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-5913/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1101 09:18:50.644213  232578 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-5913/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1101 09:18:50.663300  232578 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-5913/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1101 09:18:50.682573  232578 provision.go:87] duration metric: took 528.789595ms to configureAuth
	I1101 09:18:50.682610  232578 ubuntu.go:206] setting minikube options for container-runtime
	I1101 09:18:50.682817  232578 config.go:182] Loaded profile config "no-preload-397460": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:18:50.682982  232578 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-397460
	I1101 09:18:50.704373  232578 main.go:143] libmachine: Using SSH client type: native
	I1101 09:18:50.704680  232578 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33058 <nil> <nil>}
	I1101 09:18:50.704724  232578 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1101 09:18:50.963895  232578 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1101 09:18:50.963924  232578 machine.go:97] duration metric: took 1.451341274s to provisionDockerMachine
	I1101 09:18:50.963938  232578 client.go:176] duration metric: took 2.921680966s to LocalClient.Create
	I1101 09:18:50.963966  232578 start.go:167] duration metric: took 2.921742683s to libmachine.API.Create "no-preload-397460"
	I1101 09:18:50.963982  232578 start.go:293] postStartSetup for "no-preload-397460" (driver="docker")
	I1101 09:18:50.963999  232578 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1101 09:18:50.964086  232578 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1101 09:18:50.964133  232578 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-397460
	I1101 09:18:50.984127  232578 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33058 SSHKeyPath:/home/jenkins/minikube-integration/21835-5913/.minikube/machines/no-preload-397460/id_rsa Username:docker}
	I1101 09:18:51.087405  232578 ssh_runner.go:195] Run: cat /etc/os-release
	I1101 09:18:51.091099  232578 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1101 09:18:51.091128  232578 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1101 09:18:51.091139  232578 filesync.go:126] Scanning /home/jenkins/minikube-integration/21835-5913/.minikube/addons for local assets ...
	I1101 09:18:51.091199  232578 filesync.go:126] Scanning /home/jenkins/minikube-integration/21835-5913/.minikube/files for local assets ...
	I1101 09:18:51.091291  232578 filesync.go:149] local asset: /home/jenkins/minikube-integration/21835-5913/.minikube/files/etc/ssl/certs/94142.pem -> 94142.pem in /etc/ssl/certs
	I1101 09:18:51.091408  232578 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1101 09:18:51.100181  232578 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-5913/.minikube/files/etc/ssl/certs/94142.pem --> /etc/ssl/certs/94142.pem (1708 bytes)
	I1101 09:18:51.122728  232578 start.go:296] duration metric: took 158.729743ms for postStartSetup
	I1101 09:18:51.123123  232578 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-397460
	I1101 09:18:51.144645  232578 profile.go:143] Saving config to /home/jenkins/minikube-integration/21835-5913/.minikube/profiles/no-preload-397460/config.json ...
	I1101 09:18:51.145022  232578 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1101 09:18:51.145088  232578 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-397460
	I1101 09:18:51.167110  232578 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33058 SSHKeyPath:/home/jenkins/minikube-integration/21835-5913/.minikube/machines/no-preload-397460/id_rsa Username:docker}
	I1101 09:18:51.266222  232578 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1101 09:18:51.271960  232578 start.go:128] duration metric: took 3.232384764s to createHost
	I1101 09:18:51.271997  232578 start.go:83] releasing machines lock for "no-preload-397460", held for 3.232516117s
	I1101 09:18:51.272066  232578 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-397460
	I1101 09:18:51.291910  232578 ssh_runner.go:195] Run: cat /version.json
	I1101 09:18:51.291958  232578 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-397460
	I1101 09:18:51.291957  232578 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1101 09:18:51.292034  232578 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-397460
	I1101 09:18:51.312571  232578 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33058 SSHKeyPath:/home/jenkins/minikube-integration/21835-5913/.minikube/machines/no-preload-397460/id_rsa Username:docker}
	I1101 09:18:51.314251  232578 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33058 SSHKeyPath:/home/jenkins/minikube-integration/21835-5913/.minikube/machines/no-preload-397460/id_rsa Username:docker}
	I1101 09:18:51.414372  232578 ssh_runner.go:195] Run: systemctl --version
	I1101 09:18:51.478805  232578 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1101 09:18:51.514115  232578 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1101 09:18:51.519171  232578 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1101 09:18:51.519259  232578 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1101 09:18:51.546241  232578 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1101 09:18:51.546269  232578 start.go:496] detecting cgroup driver to use...
	I1101 09:18:51.546306  232578 detect.go:190] detected "systemd" cgroup driver on host os
	I1101 09:18:51.546360  232578 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1101 09:18:51.563279  232578 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1101 09:18:51.576636  232578 docker.go:218] disabling cri-docker service (if available) ...
	I1101 09:18:51.576688  232578 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1101 09:18:51.594387  232578 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1101 09:18:51.613152  232578 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1101 09:18:51.697365  232578 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1101 09:18:51.790192  232578 docker.go:234] disabling docker service ...
	I1101 09:18:51.790263  232578 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1101 09:18:51.810654  232578 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1101 09:18:51.825966  232578 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1101 09:18:51.911751  232578 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1101 09:18:51.999045  232578 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1101 09:18:52.012480  232578 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1101 09:18:52.027219  232578 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1101 09:18:52.027280  232578 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:18:52.038415  232578 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1101 09:18:52.038478  232578 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:18:52.047937  232578 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:18:52.057267  232578 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:18:52.066745  232578 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1101 09:18:52.075472  232578 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:18:52.084680  232578 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:18:52.098952  232578 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:18:52.108372  232578 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1101 09:18:52.116196  232578 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1101 09:18:52.123662  232578 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 09:18:52.222930  232578 ssh_runner.go:195] Run: sudo systemctl restart crio
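The sed commands above rewrite the CRI-O drop-in /etc/crio/crio.conf.d/02-crio.conf: pin pause_image to registry.k8s.io/pause:3.10.1, set cgroup_manager to "systemd", pin conmon_cgroup to "pod", inject net.ipv4.ip_unprivileged_port_start=0 into default_sysctls, then daemon-reload and restart crio. A sketch that applies the same key = value edits directly in Go (setConfValue is a hypothetical helper; the sysctl and restart steps are only noted in comments):

```go
package main

import (
	"fmt"
	"os"
	"regexp"
)

// setConfValue replaces (or appends) a `key = value` line in a CRI-O
// drop-in, mirroring the sed -i 's|^.*key = .*$|key = "value"|' calls
// in the log.
func setConfValue(conf []byte, key, value string) []byte {
	line := fmt.Sprintf("%s = %q", key, value)
	re := regexp.MustCompile(`(?m)^.*` + regexp.QuoteMeta(key) + ` = .*$`)
	if re.Match(conf) {
		return re.ReplaceAll(conf, []byte(line))
	}
	return append(conf, []byte("\n"+line+"\n")...)
}

func main() {
	path := "/etc/crio/crio.conf.d/02-crio.conf"
	conf, err := os.ReadFile(path)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	conf = setConfValue(conf, "pause_image", "registry.k8s.io/pause:3.10.1")
	conf = setConfValue(conf, "cgroup_manager", "systemd")
	conf = setConfValue(conf, "conmon_cgroup", "pod")
	if err := os.WriteFile(path, conf, 0o644); err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	// The log then runs `systemctl daemon-reload`, `systemctl restart crio`
	// and waits up to 60s for /var/run/crio/crio.sock to appear.
	fmt.Println("updated", path, "- restart crio to apply")
}
```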
	I1101 09:18:52.352797  232578 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1101 09:18:52.352881  232578 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1101 09:18:52.357137  232578 start.go:564] Will wait 60s for crictl version
	I1101 09:18:52.357203  232578 ssh_runner.go:195] Run: which crictl
	I1101 09:18:52.361091  232578 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1101 09:18:52.387223  232578 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1101 09:18:52.387308  232578 ssh_runner.go:195] Run: crio --version
	I1101 09:18:52.420396  232578 ssh_runner.go:195] Run: crio --version
	I1101 09:18:52.452519  232578 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1101 09:18:52.454204  232578 cli_runner.go:164] Run: docker network inspect no-preload-397460 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1101 09:18:52.472063  232578 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I1101 09:18:52.476475  232578 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
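The one-liner above keeps /etc/hosts idempotent: strip any existing host.minikube.internal entry, append the gateway mapping 192.168.94.1, and copy the temp file back into place. The same logic in Go, assuming direct write access instead of the sudo cp round-trip (ensureHostEntry is illustrative):

```go
package main

import (
	"fmt"
	"os"
	"strings"
)

// ensureHostEntry rewrites hostsPath so it contains exactly one line
// mapping name to ip, following the "grep -v ... ; echo ..." pattern
// in the log.
func ensureHostEntry(hostsPath, ip, name string) error {
	data, err := os.ReadFile(hostsPath)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if strings.HasSuffix(line, "\t"+name) {
			continue // drop any stale entry for this name
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+name)
	return os.WriteFile(hostsPath, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
}

func main() {
	if err := ensureHostEntry("/etc/hosts", "192.168.94.1", "host.minikube.internal"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
```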
	I1101 09:18:52.487232  232578 kubeadm.go:884] updating cluster {Name:no-preload-397460 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-397460 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1101 09:18:52.487339  232578 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1101 09:18:52.487379  232578 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 09:18:52.513829  232578 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.34.1". assuming images are not preloaded.
	I1101 09:18:52.513860  232578 cache_images.go:90] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.34.1 registry.k8s.io/kube-controller-manager:v1.34.1 registry.k8s.io/kube-scheduler:v1.34.1 registry.k8s.io/kube-proxy:v1.34.1 registry.k8s.io/pause:3.10.1 registry.k8s.io/etcd:3.6.4-0 registry.k8s.io/coredns/coredns:v1.12.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1101 09:18:52.513930  232578 image.go:138] retrieving image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1101 09:18:52.513965  232578 image.go:138] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1101 09:18:52.513974  232578 image.go:138] retrieving image: registry.k8s.io/kube-apiserver:v1.34.1
	I1101 09:18:52.513986  232578 image.go:138] retrieving image: registry.k8s.io/kube-proxy:v1.34.1
	I1101 09:18:52.514010  232578 image.go:138] retrieving image: registry.k8s.io/etcd:3.6.4-0
	I1101 09:18:52.513996  232578 image.go:138] retrieving image: registry.k8s.io/coredns/coredns:v1.12.1
	I1101 09:18:52.514081  232578 image.go:138] retrieving image: registry.k8s.io/kube-scheduler:v1.34.1
	I1101 09:18:52.513967  232578 image.go:138] retrieving image: registry.k8s.io/pause:3.10.1
	I1101 09:18:52.515131  232578 image.go:181] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1101 09:18:52.515143  232578 image.go:181] daemon lookup for registry.k8s.io/kube-controller-manager:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1101 09:18:52.515151  232578 image.go:181] daemon lookup for registry.k8s.io/pause:3.10.1: Error response from daemon: No such image: registry.k8s.io/pause:3.10.1
	I1101 09:18:52.515172  232578 image.go:181] daemon lookup for registry.k8s.io/kube-proxy:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.34.1
	I1101 09:18:52.515188  232578 image.go:181] daemon lookup for registry.k8s.io/etcd:3.6.4-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.6.4-0
	I1101 09:18:52.515189  232578 image.go:181] daemon lookup for registry.k8s.io/coredns/coredns:v1.12.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.12.1
	I1101 09:18:52.515204  232578 image.go:181] daemon lookup for registry.k8s.io/kube-apiserver:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.34.1
	I1101 09:18:52.515136  232578 image.go:181] daemon lookup for registry.k8s.io/kube-scheduler:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.34.1
	I1101 09:18:52.669247  232578 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.34.1
	I1101 09:18:52.669278  232578 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.34.1
	I1101 09:18:52.678470  232578 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.34.1
	I1101 09:18:52.681018  232578 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.12.1
	I1101 09:18:52.684168  232578 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.6.4-0
	I1101 09:18:52.719382  232578 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10.1
	I1101 09:18:52.763648  232578 cache_images.go:118] "registry.k8s.io/kube-proxy:v1.34.1" needs transfer: "registry.k8s.io/kube-proxy:v1.34.1" does not exist at hash "fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7" in container runtime
	I1101 09:18:52.763671  232578 cache_images.go:118] "registry.k8s.io/kube-controller-manager:v1.34.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.34.1" does not exist at hash "c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f" in container runtime
	I1101 09:18:52.763694  232578 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.34.1
	I1101 09:18:52.763708  232578 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1101 09:18:52.763739  232578 ssh_runner.go:195] Run: which crictl
	I1101 09:18:52.763760  232578 ssh_runner.go:195] Run: which crictl
	I1101 09:18:52.763756  232578 cache_images.go:118] "registry.k8s.io/kube-scheduler:v1.34.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.34.1" does not exist at hash "7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813" in container runtime
	I1101 09:18:52.763814  232578 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.34.1
	I1101 09:18:52.763841  232578 ssh_runner.go:195] Run: which crictl
	I1101 09:18:52.763846  232578 cache_images.go:118] "registry.k8s.io/coredns/coredns:v1.12.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.12.1" does not exist at hash "52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969" in container runtime
	I1101 09:18:52.763892  232578 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.12.1
	I1101 09:18:52.763935  232578 ssh_runner.go:195] Run: which crictl
	I1101 09:18:52.763960  232578 cache_images.go:118] "registry.k8s.io/etcd:3.6.4-0" needs transfer: "registry.k8s.io/etcd:3.6.4-0" does not exist at hash "5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115" in container runtime
	I1101 09:18:52.763999  232578 cri.go:218] Removing image: registry.k8s.io/etcd:3.6.4-0
	I1101 09:18:52.764010  232578 cache_images.go:118] "registry.k8s.io/pause:3.10.1" needs transfer: "registry.k8s.io/pause:3.10.1" does not exist at hash "cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f" in container runtime
	I1101 09:18:52.764031  232578 cri.go:218] Removing image: registry.k8s.io/pause:3.10.1
	I1101 09:18:52.764038  232578 ssh_runner.go:195] Run: which crictl
	I1101 09:18:52.764059  232578 ssh_runner.go:195] Run: which crictl
	I1101 09:18:52.768578  232578 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.34.1
	I1101 09:18:52.770016  232578 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.34.1
	I1101 09:18:52.770047  232578 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.34.1
	I1101 09:18:52.799784  232578 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.12.1
	I1101 09:18:52.799796  232578 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1101 09:18:52.799817  232578 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.4-0
	I1101 09:18:52.799927  232578 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.34.1
	I1101 09:18:52.800634  232578 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.34.1
	I1101 09:18:52.800634  232578 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.34.1
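cache_images.go decides per image whether it "needs transfer": it asks the runtime for the image ID (the podman image inspect --format {{.Id}} calls above), and if the image is missing or not at the expected hash it removes the stale copy with crictl rmi and loads the cached tarball from the host. A hedged sketch of that decision; runtimeImageID, ensureImage and loadTar are made-up names, loadTar is a stub, and the commands run locally instead of through minikube's ssh_runner:

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// runtimeImageID asks the runtime for the image's ID, as the log does with
// "sudo podman image inspect --format {{.Id}} <image>"; an error means the
// image is not present.
func runtimeImageID(image string) (string, error) {
	out, err := exec.Command("sudo", "podman", "image", "inspect", "--format", "{{.Id}}", image).Output()
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(out)), nil
}

// loadTar is a placeholder for copying the cached tarball to
// /var/lib/minikube/images and loading it into CRI-O.
func loadTar(image, tarPath string) error {
	fmt.Printf("loading %s from %s\n", image, tarPath)
	return nil
}

// ensureImage mirrors the "needs transfer" logic in the log: keep the image
// if the runtime already has it at the expected hash, otherwise remove the
// stale copy and load the cached tarball.
func ensureImage(image, wantID, tarPath string) error {
	gotID, err := runtimeImageID(image)
	if err == nil && gotID == wantID {
		return nil // already present at the expected hash
	}
	fmt.Printf("%q needs transfer: not at hash %q in container runtime\n", image, wantID)
	_ = exec.Command("sudo", "crictl", "rmi", image).Run() // ignore "not found"
	return loadTar(image, tarPath)
}

func main() {
	err := ensureImage(
		"registry.k8s.io/pause:3.10.1",
		"cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f",
		"/var/lib/minikube/images/pause_3.10.1",
	)
	if err != nil {
		fmt.Println(err)
	}
}
```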
	I1101 09:18:51.371407  230484 out.go:252]   - Generating certificates and keys ...
	I1101 09:18:51.371525  230484 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1101 09:18:51.371628  230484 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1101 09:18:51.526613  230484 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1101 09:18:51.692180  230484 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1101 09:18:51.817273  230484 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1101 09:18:51.981588  230484 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1101 09:18:52.068502  230484 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1101 09:18:52.068714  230484 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-152344] and IPs [192.168.103.2 127.0.0.1 ::1]
	I1101 09:18:52.277246  230484 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1101 09:18:52.277415  230484 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-152344] and IPs [192.168.103.2 127.0.0.1 ::1]
	I1101 09:18:52.406061  230484 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1101 09:18:52.687395  230484 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1101 09:18:52.844385  230484 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1101 09:18:52.844819  230484 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1101 09:18:53.101440  230484 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1101 09:18:53.554276  230484 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1101 09:18:53.716333  230484 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1101 09:18:54.011395  230484 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1101 09:18:54.012005  230484 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1101 09:18:54.017382  230484 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1101 09:18:54.020499  230484 out.go:252]   - Booting up control plane ...
	I1101 09:18:54.020608  230484 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1101 09:18:54.020724  230484 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1101 09:18:54.021362  230484 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1101 09:18:54.036349  230484 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1101 09:18:54.037443  230484 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1101 09:18:54.037548  230484 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1101 09:18:54.161780  230484 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1101 09:18:51.548858  216020 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1101 09:18:51.548917  216020 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1101 09:18:52.836922  232578 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.4-0
	I1101 09:18:52.836925  232578 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.34.1
	I1101 09:18:52.837003  232578 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1101 09:18:52.837685  232578 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.12.1
	I1101 09:18:52.839909  232578 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.34.1
	I1101 09:18:52.839925  232578 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.34.1
	I1101 09:18:52.872417  232578 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.4-0
	I1101 09:18:52.874689  232578 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21835-5913/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1
	I1101 09:18:52.874750  232578 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1101 09:18:52.874765  232578 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.12.1
	I1101 09:18:52.874787  232578 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.34.1
	I1101 09:18:52.878395  232578 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21835-5913/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1
	I1101 09:18:52.878443  232578 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21835-5913/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1
	I1101 09:18:52.878490  232578 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.34.1
	I1101 09:18:52.878520  232578 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.34.1
	I1101 09:18:52.904013  232578 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21835-5913/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0
	I1101 09:18:52.904126  232578 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.4-0
	I1101 09:18:52.904383  232578 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21835-5913/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1
	I1101 09:18:52.904465  232578 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.12.1
	I1101 09:18:52.906499  232578 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-scheduler_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-scheduler_v1.34.1': No such file or directory
	I1101 09:18:52.906513  232578 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21835-5913/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1
	I1101 09:18:52.906532  232578 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-5913/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1 --> /var/lib/minikube/images/kube-scheduler_v1.34.1 (17396736 bytes)
	I1101 09:18:52.906605  232578 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1
	I1101 09:18:52.906632  232578 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-proxy_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-proxy_v1.34.1': No such file or directory
	I1101 09:18:52.906660  232578 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-5913/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1 --> /var/lib/minikube/images/kube-proxy_v1.34.1 (25966080 bytes)
	I1101 09:18:52.906686  232578 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-controller-manager_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-controller-manager_v1.34.1': No such file or directory
	I1101 09:18:52.906710  232578 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-5913/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1 --> /var/lib/minikube/images/kube-controller-manager_v1.34.1 (22831104 bytes)
	I1101 09:18:52.915461  232578 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.6.4-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.4-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.6.4-0': No such file or directory
	I1101 09:18:52.915505  232578 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-5913/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0 --> /var/lib/minikube/images/etcd_3.6.4-0 (74320896 bytes)
	I1101 09:18:52.920462  232578 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.34.1
	I1101 09:18:52.921495  232578 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.12.1: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.12.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.12.1': No such file or directory
	I1101 09:18:52.921531  232578 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-5913/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1 --> /var/lib/minikube/images/coredns_v1.12.1 (22394368 bytes)
	I1101 09:18:52.922696  232578 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.10.1: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.10.1': No such file or directory
	I1101 09:18:52.922730  232578 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-5913/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 --> /var/lib/minikube/images/pause_3.10.1 (321024 bytes)
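The stat/scp pairs above implement a simple cache-transfer pattern: each image archive is stat'ed on the node first and copied from the local cache only when the existence check fails. A local-filesystem analogue of that pattern (paths are placeholders; the real transfer in the log runs over SSH):

// Local analogue of the existence-check-then-transfer step: stat the
// destination and only copy the cached archive when it is missing or
// differs in size.
package main

import (
	"fmt"
	"io"
	"os"
)

func ensureImageArchive(src, dst string) error {
	srcInfo, err := os.Stat(src)
	if err != nil {
		return fmt.Errorf("cache miss: %w", err)
	}
	if dstInfo, err := os.Stat(dst); err == nil && dstInfo.Size() == srcInfo.Size() {
		return nil // already present, skip the copy
	}
	in, err := os.Open(src)
	if err != nil {
		return err
	}
	defer in.Close()
	out, err := os.Create(dst)
	if err != nil {
		return err
	}
	defer out.Close()
	_, err = io.Copy(out, in)
	return err
}

func main() {
	// Placeholder paths, not the report's real cache locations.
	if err := ensureImageArchive("/tmp/cache/pause_3.10.1", "/tmp/images/pause_3.10.1"); err != nil {
		fmt.Println(err)
	}
}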
	I1101 09:18:52.999323  232578 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1101 09:18:53.043171  232578 cache_images.go:118] "registry.k8s.io/kube-apiserver:v1.34.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.34.1" does not exist at hash "c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97" in container runtime
	I1101 09:18:53.043218  232578 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.34.1
	I1101 09:18:53.043267  232578 ssh_runner.go:195] Run: which crictl
	I1101 09:18:53.077989  232578 crio.go:275] Loading image: /var/lib/minikube/images/pause_3.10.1
	I1101 09:18:53.078061  232578 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.10.1
	I1101 09:18:53.119054  232578 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.34.1
	I1101 09:18:53.119712  232578 cache_images.go:118] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I1101 09:18:53.119757  232578 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1101 09:18:53.119800  232578 ssh_runner.go:195] Run: which crictl
	I1101 09:18:53.595088  232578 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21835-5913/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 from cache
	I1101 09:18:53.595151  232578 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.34.1
	I1101 09:18:53.595194  232578 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1101 09:18:53.595247  232578 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.34.1
	I1101 09:18:53.595200  232578 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.34.1
	I1101 09:18:54.752378  232578 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.15714584s)
	I1101 09:18:54.752445  232578 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1101 09:18:54.752506  232578 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.34.1: (1.157220007s)
	I1101 09:18:54.752533  232578 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21835-5913/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1 from cache
	I1101 09:18:54.752557  232578 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.34.1
	I1101 09:18:54.752592  232578 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.34.1: (1.157319193s)
	I1101 09:18:54.752664  232578 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.34.1
	I1101 09:18:54.752598  232578 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.34.1
	I1101 09:18:54.782581  232578 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1101 09:18:54.782955  232578 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21835-5913/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1
	I1101 09:18:54.783052  232578 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.34.1
	I1101 09:18:56.054682  232578 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.34.1: (1.301925191s)
	I1101 09:18:56.054718  232578 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21835-5913/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1 from cache
	I1101 09:18:56.054737  232578 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.34.1
	I1101 09:18:56.054746  232578 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.272129159s)
	I1101 09:18:56.054791  232578 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.34.1
	I1101 09:18:56.054793  232578 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21835-5913/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1101 09:18:56.054790  232578 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.34.1: (1.271714096s)
	I1101 09:18:56.054937  232578 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-apiserver_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-apiserver_v1.34.1': No such file or directory
	I1101 09:18:56.054957  232578 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1101 09:18:56.054964  232578 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-5913/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1 --> /var/lib/minikube/images/kube-apiserver_v1.34.1 (27073024 bytes)
	I1101 09:18:57.360444  232578 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.34.1: (1.305625285s)
	I1101 09:18:57.360481  232578 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21835-5913/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1 from cache
	I1101 09:18:57.360502  232578 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.12.1
	I1101 09:18:57.360514  232578 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (1.305535558s)
	I1101 09:18:57.360551  232578 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.12.1
	I1101 09:18:57.360552  232578 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I1101 09:18:57.360579  232578 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-5913/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (9060352 bytes)
	I1101 09:18:59.164409  230484 kubeadm.go:319] [apiclient] All control plane components are healthy after 5.002796 seconds
	I1101 09:18:59.164574  230484 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1101 09:18:59.176340  230484 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1101 09:18:56.549154  216020 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1101 09:18:56.549261  216020 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1101 09:18:56.549327  216020 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1101 09:18:56.580741  216020 cri.go:89] found id: "d2fc2ccdf9333f0fc4490b5a07f6246fd25294653d0b29a2e8b27d471ce65ce3"
	I1101 09:18:56.580771  216020 cri.go:89] found id: "eabdbeb5890947bf53852620457589bd21197fc78d28db737be101d8bb385f10"
	I1101 09:18:56.580777  216020 cri.go:89] found id: ""
	I1101 09:18:56.580787  216020 logs.go:282] 2 containers: [d2fc2ccdf9333f0fc4490b5a07f6246fd25294653d0b29a2e8b27d471ce65ce3 eabdbeb5890947bf53852620457589bd21197fc78d28db737be101d8bb385f10]
	I1101 09:18:56.580839  216020 ssh_runner.go:195] Run: which crictl
	I1101 09:18:56.585841  216020 ssh_runner.go:195] Run: which crictl
	I1101 09:18:56.590268  216020 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1101 09:18:56.590333  216020 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1101 09:18:56.621247  216020 cri.go:89] found id: ""
	I1101 09:18:56.621279  216020 logs.go:282] 0 containers: []
	W1101 09:18:56.621289  216020 logs.go:284] No container was found matching "etcd"
	I1101 09:18:56.621296  216020 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1101 09:18:56.621349  216020 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1101 09:18:56.654560  216020 cri.go:89] found id: ""
	I1101 09:18:56.654588  216020 logs.go:282] 0 containers: []
	W1101 09:18:56.654598  216020 logs.go:284] No container was found matching "coredns"
	I1101 09:18:56.654605  216020 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1101 09:18:56.654665  216020 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1101 09:18:56.689520  216020 cri.go:89] found id: "7bc51deb4b19b20f402b361941a1b67db11ece9919ebd34259e751ec1f07abb8"
	I1101 09:18:56.689637  216020 cri.go:89] found id: ""
	I1101 09:18:56.689653  216020 logs.go:282] 1 containers: [7bc51deb4b19b20f402b361941a1b67db11ece9919ebd34259e751ec1f07abb8]
	I1101 09:18:56.689715  216020 ssh_runner.go:195] Run: which crictl
	I1101 09:18:56.695841  216020 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1101 09:18:56.695924  216020 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1101 09:18:56.736560  216020 cri.go:89] found id: ""
	I1101 09:18:56.736587  216020 logs.go:282] 0 containers: []
	W1101 09:18:56.736598  216020 logs.go:284] No container was found matching "kube-proxy"
	I1101 09:18:56.736605  216020 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1101 09:18:56.736657  216020 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1101 09:18:56.778276  216020 cri.go:89] found id: "40fb6827ef020faf01b83649662bbb9dd9f1e77a7e4e9f618812cf6b94b44fe0"
	I1101 09:18:56.778302  216020 cri.go:89] found id: "695fb2f4a2bd5c728b61d2d503f417c54b1088ddb5902430c9fd42b8fcf8289f"
	I1101 09:18:56.778307  216020 cri.go:89] found id: ""
	I1101 09:18:56.778318  216020 logs.go:282] 2 containers: [40fb6827ef020faf01b83649662bbb9dd9f1e77a7e4e9f618812cf6b94b44fe0 695fb2f4a2bd5c728b61d2d503f417c54b1088ddb5902430c9fd42b8fcf8289f]
	I1101 09:18:56.778380  216020 ssh_runner.go:195] Run: which crictl
	I1101 09:18:56.784000  216020 ssh_runner.go:195] Run: which crictl
	I1101 09:18:56.789066  216020 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1101 09:18:56.789130  216020 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1101 09:18:56.827787  216020 cri.go:89] found id: ""
	I1101 09:18:56.827996  216020 logs.go:282] 0 containers: []
	W1101 09:18:56.828013  216020 logs.go:284] No container was found matching "kindnet"
	I1101 09:18:56.828022  216020 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1101 09:18:56.828202  216020 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1101 09:18:56.868479  216020 cri.go:89] found id: ""
	I1101 09:18:56.868553  216020 logs.go:282] 0 containers: []
	W1101 09:18:56.868565  216020 logs.go:284] No container was found matching "storage-provisioner"
	I1101 09:18:56.868583  216020 logs.go:123] Gathering logs for describe nodes ...
	I1101 09:18:56.868601  216020 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1101 09:18:59.700832  230484 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1101 09:18:59.701131  230484 kubeadm.go:319] [mark-control-plane] Marking the node old-k8s-version-152344 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1101 09:19:00.213656  230484 kubeadm.go:319] [bootstrap-token] Using token: ug065k.uu8y2cwye7dn303f
	I1101 09:19:00.218318  230484 out.go:252]   - Configuring RBAC rules ...
	I1101 09:19:00.218514  230484 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1101 09:19:00.223905  230484 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1101 09:19:00.232836  230484 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1101 09:19:00.236330  230484 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1101 09:19:00.239577  230484 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1101 09:19:00.242951  230484 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1101 09:19:00.258724  230484 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1101 09:19:00.483894  230484 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1101 09:19:00.628978  230484 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1101 09:19:00.630349  230484 kubeadm.go:319] 
	I1101 09:19:00.630439  230484 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1101 09:19:00.630446  230484 kubeadm.go:319] 
	I1101 09:19:00.630535  230484 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1101 09:19:00.630543  230484 kubeadm.go:319] 
	I1101 09:19:00.630575  230484 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1101 09:19:00.630651  230484 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1101 09:19:00.630726  230484 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1101 09:19:00.630740  230484 kubeadm.go:319] 
	I1101 09:19:00.630806  230484 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1101 09:19:00.630814  230484 kubeadm.go:319] 
	I1101 09:19:00.630897  230484 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1101 09:19:00.630904  230484 kubeadm.go:319] 
	I1101 09:19:00.630971  230484 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1101 09:19:00.631070  230484 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1101 09:19:00.631159  230484 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1101 09:19:00.631165  230484 kubeadm.go:319] 
	I1101 09:19:00.631263  230484 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1101 09:19:00.631368  230484 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1101 09:19:00.631375  230484 kubeadm.go:319] 
	I1101 09:19:00.631476  230484 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token ug065k.uu8y2cwye7dn303f \
	I1101 09:19:00.631610  230484 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:77adef89518789e9abc276228bed68518bce4d031c33004dbbcc2105ab5e0752 \
	I1101 09:19:00.631661  230484 kubeadm.go:319] 	--control-plane 
	I1101 09:19:00.631667  230484 kubeadm.go:319] 
	I1101 09:19:00.631778  230484 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1101 09:19:00.631787  230484 kubeadm.go:319] 
	I1101 09:19:00.631933  230484 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token ug065k.uu8y2cwye7dn303f \
	I1101 09:19:00.632139  230484 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:77adef89518789e9abc276228bed68518bce4d031c33004dbbcc2105ab5e0752 
	I1101 09:19:00.635186  230484 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1043-gcp\n", err: exit status 1
	I1101 09:19:00.635357  230484 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1101 09:19:00.635385  230484 cni.go:84] Creating CNI manager for ""
	I1101 09:19:00.635398  230484 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1101 09:19:00.641212  230484 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1101 09:18:58.698642  232578 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.12.1: (1.338059477s)
	I1101 09:18:58.698675  232578 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21835-5913/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1 from cache
	I1101 09:18:58.698705  232578 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.6.4-0
	I1101 09:18:58.698803  232578 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.6.4-0
	I1101 09:19:02.358768  232578 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.6.4-0: (3.659936402s)
	I1101 09:19:02.358805  232578 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21835-5913/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0 from cache
	I1101 09:19:02.358838  232578 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.34.1
	I1101 09:19:02.358916  232578 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.34.1
	I1101 09:19:00.642748  230484 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1101 09:19:00.648283  230484 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.0/kubectl ...
	I1101 09:19:00.648304  230484 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1101 09:19:00.678897  230484 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1101 09:19:01.420736  230484 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1101 09:19:01.420838  230484 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 09:19:01.420838  230484 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes old-k8s-version-152344 minikube.k8s.io/updated_at=2025_11_01T09_19_01_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=21e20c7776311c6e29254646bf2620ea610dd192 minikube.k8s.io/name=old-k8s-version-152344 minikube.k8s.io/primary=true
	I1101 09:19:01.432832  230484 ops.go:34] apiserver oom_adj: -16
	I1101 09:19:01.500449  230484 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 09:19:02.000774  230484 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 09:19:02.501173  230484 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 09:19:03.001396  230484 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 09:19:03.501535  230484 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 09:19:04.000922  230484 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 09:19:03.757127  232578 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.34.1: (1.398179562s)
	I1101 09:19:03.757153  232578 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21835-5913/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1 from cache
	I1101 09:19:03.757179  232578 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1101 09:19:03.757225  232578 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I1101 09:19:04.304350  232578 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21835-5913/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1101 09:19:04.304385  232578 cache_images.go:125] Successfully loaded all cached images
	I1101 09:19:04.304392  232578 cache_images.go:94] duration metric: took 11.790505022s to LoadCachedImages
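Once the archives are on the node, each one is loaded into the CRI-O image store with "sudo podman load -i <archive>", as the crio.go lines show. A rough Go sketch of that shell-out step (archive path is a placeholder; requires podman and root on the target):

// Sketch of the load step reported above, shelling out the same way the
// ssh_runner lines do.
package main

import (
	"fmt"
	"os/exec"
)

func loadImageArchive(archive string) error {
	cmd := exec.Command("sudo", "podman", "load", "-i", archive)
	out, err := cmd.CombinedOutput()
	if err != nil {
		return fmt.Errorf("podman load %s failed: %v\n%s", archive, err, out)
	}
	return nil
}

func main() {
	if err := loadImageArchive("/var/lib/minikube/images/pause_3.10.1"); err != nil {
		fmt.Println(err)
	}
}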
	I1101 09:19:04.304406  232578 kubeadm.go:935] updating node { 192.168.94.2 8443 v1.34.1 crio true true} ...
	I1101 09:19:04.304487  232578 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=no-preload-397460 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:no-preload-397460 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1101 09:19:04.304548  232578 ssh_runner.go:195] Run: crio config
	I1101 09:19:04.352149  232578 cni.go:84] Creating CNI manager for ""
	I1101 09:19:04.352171  232578 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1101 09:19:04.352189  232578 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1101 09:19:04.352209  232578 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-397460 NodeName:no-preload-397460 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1101 09:19:04.352324  232578 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-397460"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.94.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1101 09:19:04.352388  232578 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1101 09:19:04.361159  232578 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.34.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.34.1': No such file or directory
	
	Initiating transfer...
	I1101 09:19:04.361225  232578 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.34.1
	I1101 09:19:04.370193  232578 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubectl.sha256
	I1101 09:19:04.370268  232578 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubectl
	I1101 09:19:04.370288  232578 download.go:108] Downloading: https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/21835-5913/.minikube/cache/linux/amd64/v1.34.1/kubelet
	I1101 09:19:04.370304  232578 download.go:108] Downloading: https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/21835-5913/.minikube/cache/linux/amd64/v1.34.1/kubeadm
	I1101 09:19:04.374900  232578 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.1/kubectl': No such file or directory
	I1101 09:19:04.374929  232578 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-5913/.minikube/cache/linux/amd64/v1.34.1/kubectl --> /var/lib/minikube/binaries/v1.34.1/kubectl (60559544 bytes)
	I1101 09:19:05.099069  232578 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 09:19:05.112544  232578 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubelet
	I1101 09:19:05.116959  232578 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.1/kubelet': No such file or directory
	I1101 09:19:05.116994  232578 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-5913/.minikube/cache/linux/amd64/v1.34.1/kubelet --> /var/lib/minikube/binaries/v1.34.1/kubelet (59195684 bytes)
	I1101 09:19:05.196444  232578 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubeadm
	I1101 09:19:05.203362  232578 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.1/kubeadm': No such file or directory
	I1101 09:19:05.203398  232578 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-5913/.minikube/cache/linux/amd64/v1.34.1/kubeadm --> /var/lib/minikube/binaries/v1.34.1/kubeadm (74027192 bytes)
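The download.go lines fetch kubelet and kubeadm from dl.k8s.io and verify them against the companion .sha256 files named in the URLs. A self-contained sketch of that download-and-verify step, assuming the .sha256 file holds just the hex digest:

// Sketch of the download-with-checksum step: fetch the binary and its
// .sha256 companion, then compare SHA-256 digests.
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
	"net/http"
	"os"
	"strings"
)

func fetch(url, dst string) error {
	resp, err := http.Get(url)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	f, err := os.Create(dst)
	if err != nil {
		return err
	}
	defer f.Close()
	_, err = io.Copy(f, resp.Body)
	return err
}

func sha256File(path string) (string, error) {
	f, err := os.Open(path)
	if err != nil {
		return "", err
	}
	defer f.Close()
	h := sha256.New()
	if _, err := io.Copy(h, f); err != nil {
		return "", err
	}
	return hex.EncodeToString(h.Sum(nil)), nil
}

func main() {
	base := "https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubelet"
	if err := fetch(base, "kubelet"); err != nil {
		fmt.Println(err)
		return
	}
	if err := fetch(base+".sha256", "kubelet.sha256"); err != nil {
		fmt.Println(err)
		return
	}
	want, err := os.ReadFile("kubelet.sha256")
	if err != nil {
		fmt.Println(err)
		return
	}
	got, err := sha256File("kubelet")
	if err != nil {
		fmt.Println(err)
		return
	}
	if got != strings.TrimSpace(string(want)) {
		fmt.Println("checksum mismatch")
		return
	}
	fmt.Println("kubelet verified")
}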
	I1101 09:19:05.443985  232578 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1101 09:19:05.452299  232578 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1101 09:19:05.465796  232578 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1101 09:19:05.482028  232578 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2213 bytes)
	I1101 09:19:05.495067  232578 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I1101 09:19:05.498989  232578 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
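The bash one-liner above rewrites /etc/hosts so exactly one line maps control-plane.minikube.internal to the node IP. An equivalent Go sketch, written against a scratch copy rather than /etc/hosts itself:

// Drop any existing tab-separated entry for the host and append the
// current IP, mirroring the grep -v / echo pipeline in the log.
package main

import (
	"fmt"
	"os"
	"strings"
)

func ensureHostsEntry(path, ip, host string) error {
	data, err := os.ReadFile(path)
	if err != nil && !os.IsNotExist(err) {
		return err
	}
	var kept []string
	for _, line := range strings.Split(string(data), "\n") {
		if line != "" && !strings.HasSuffix(line, "\t"+host) {
			kept = append(kept, line)
		}
	}
	kept = append(kept, ip+"\t"+host)
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	// Scratch copy; the real target in the log is /etc/hosts.
	if err := ensureHostsEntry("/tmp/hosts.copy", "192.168.94.2", "control-plane.minikube.internal"); err != nil {
		fmt.Println(err)
	}
}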
	I1101 09:19:05.510594  232578 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 09:19:05.600346  232578 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1101 09:19:05.626579  232578 certs.go:69] Setting up /home/jenkins/minikube-integration/21835-5913/.minikube/profiles/no-preload-397460 for IP: 192.168.94.2
	I1101 09:19:05.626603  232578 certs.go:195] generating shared ca certs ...
	I1101 09:19:05.626625  232578 certs.go:227] acquiring lock for ca certs: {Name:mkfdee6a84670347521013ebeef165551380cb9c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:19:05.626805  232578 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21835-5913/.minikube/ca.key
	I1101 09:19:05.626879  232578 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21835-5913/.minikube/proxy-client-ca.key
	I1101 09:19:05.626896  232578 certs.go:257] generating profile certs ...
	I1101 09:19:05.626960  232578 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21835-5913/.minikube/profiles/no-preload-397460/client.key
	I1101 09:19:05.626972  232578 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21835-5913/.minikube/profiles/no-preload-397460/client.crt with IP's: []
	I1101 09:19:05.761884  232578 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21835-5913/.minikube/profiles/no-preload-397460/client.crt ...
	I1101 09:19:05.761917  232578 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21835-5913/.minikube/profiles/no-preload-397460/client.crt: {Name:mk0d583f778afaba6b073aa4b091a4919d55643f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:19:05.762097  232578 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21835-5913/.minikube/profiles/no-preload-397460/client.key ...
	I1101 09:19:05.762114  232578 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21835-5913/.minikube/profiles/no-preload-397460/client.key: {Name:mka219b59a9f16438e999f6fbe69dcd9a83e22fc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:19:05.762230  232578 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21835-5913/.minikube/profiles/no-preload-397460/apiserver.key.7741ef4f
	I1101 09:19:05.762248  232578 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21835-5913/.minikube/profiles/no-preload-397460/apiserver.crt.7741ef4f with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.94.2]
	I1101 09:19:06.568245  232578 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21835-5913/.minikube/profiles/no-preload-397460/apiserver.crt.7741ef4f ...
	I1101 09:19:06.568281  232578 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21835-5913/.minikube/profiles/no-preload-397460/apiserver.crt.7741ef4f: {Name:mk7ae0622f9462bd28ff798e9063deb6cead5f5b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:19:06.568498  232578 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21835-5913/.minikube/profiles/no-preload-397460/apiserver.key.7741ef4f ...
	I1101 09:19:06.568518  232578 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21835-5913/.minikube/profiles/no-preload-397460/apiserver.key.7741ef4f: {Name:mk43d77d2273d17dfe4475fa484e368018a0ab6f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:19:06.568620  232578 certs.go:382] copying /home/jenkins/minikube-integration/21835-5913/.minikube/profiles/no-preload-397460/apiserver.crt.7741ef4f -> /home/jenkins/minikube-integration/21835-5913/.minikube/profiles/no-preload-397460/apiserver.crt
	I1101 09:19:06.568715  232578 certs.go:386] copying /home/jenkins/minikube-integration/21835-5913/.minikube/profiles/no-preload-397460/apiserver.key.7741ef4f -> /home/jenkins/minikube-integration/21835-5913/.minikube/profiles/no-preload-397460/apiserver.key
	I1101 09:19:06.568786  232578 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21835-5913/.minikube/profiles/no-preload-397460/proxy-client.key
	I1101 09:19:06.568810  232578 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21835-5913/.minikube/profiles/no-preload-397460/proxy-client.crt with IP's: []
	I1101 09:19:06.646638  232578 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21835-5913/.minikube/profiles/no-preload-397460/proxy-client.crt ...
	I1101 09:19:06.646678  232578 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21835-5913/.minikube/profiles/no-preload-397460/proxy-client.crt: {Name:mk155fa51744701a13d09202298407fa2b3687b2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:19:06.646891  232578 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21835-5913/.minikube/profiles/no-preload-397460/proxy-client.key ...
	I1101 09:19:06.646913  232578 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21835-5913/.minikube/profiles/no-preload-397460/proxy-client.key: {Name:mk248cdd33afb3e22ce780f6e38a297bb345bd71 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
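The crypto.go lines generate profile certificates signed by the shared minikube CA, with the listed IP SANs baked into the apiserver cert. A hedged sketch of issuing such a CA-signed serving certificate with Go's crypto/x509 (the throwaway CA below is an assumption for self-containment; the log instead reuses the existing ca.crt/ca.key):

// Issue a leaf certificate with IP SANs, signed by a CA key pair.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Stand-in CA generated on the fly (assumption for the sketch).
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(10, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Leaf cert with the IP SANs reported in the log.
	leafKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	leafTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
			net.ParseIP("10.0.0.1"), net.ParseIP("192.168.94.2"),
		},
	}
	leafDER, _ := x509.CreateCertificate(rand.Reader, leafTmpl, caCert, &leafKey.PublicKey, caKey)
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: leafDER})
}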
	I1101 09:19:06.647132  232578 certs.go:484] found cert: /home/jenkins/minikube-integration/21835-5913/.minikube/certs/9414.pem (1338 bytes)
	W1101 09:19:06.647175  232578 certs.go:480] ignoring /home/jenkins/minikube-integration/21835-5913/.minikube/certs/9414_empty.pem, impossibly tiny 0 bytes
	I1101 09:19:06.647182  232578 certs.go:484] found cert: /home/jenkins/minikube-integration/21835-5913/.minikube/certs/ca-key.pem (1675 bytes)
	I1101 09:19:06.647205  232578 certs.go:484] found cert: /home/jenkins/minikube-integration/21835-5913/.minikube/certs/ca.pem (1078 bytes)
	I1101 09:19:06.647224  232578 certs.go:484] found cert: /home/jenkins/minikube-integration/21835-5913/.minikube/certs/cert.pem (1123 bytes)
	I1101 09:19:06.647245  232578 certs.go:484] found cert: /home/jenkins/minikube-integration/21835-5913/.minikube/certs/key.pem (1675 bytes)
	I1101 09:19:06.647282  232578 certs.go:484] found cert: /home/jenkins/minikube-integration/21835-5913/.minikube/files/etc/ssl/certs/94142.pem (1708 bytes)
	I1101 09:19:06.647910  232578 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-5913/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1101 09:19:06.667422  232578 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-5913/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1101 09:19:06.686153  232578 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-5913/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1101 09:19:06.704443  232578 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-5913/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1101 09:19:06.722677  232578 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-5913/.minikube/profiles/no-preload-397460/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1101 09:19:06.743662  232578 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-5913/.minikube/profiles/no-preload-397460/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1101 09:19:06.762425  232578 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-5913/.minikube/profiles/no-preload-397460/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1101 09:19:06.781112  232578 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-5913/.minikube/profiles/no-preload-397460/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1101 09:19:06.800135  232578 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-5913/.minikube/certs/9414.pem --> /usr/share/ca-certificates/9414.pem (1338 bytes)
	I1101 09:19:06.820226  232578 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-5913/.minikube/files/etc/ssl/certs/94142.pem --> /usr/share/ca-certificates/94142.pem (1708 bytes)
	I1101 09:19:06.838305  232578 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-5913/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1101 09:19:06.856411  232578 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1101 09:19:06.869849  232578 ssh_runner.go:195] Run: openssl version
	I1101 09:19:06.876703  232578 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9414.pem && ln -fs /usr/share/ca-certificates/9414.pem /etc/ssl/certs/9414.pem"
	I1101 09:19:06.885783  232578 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9414.pem
	I1101 09:19:06.889916  232578 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  1 08:35 /usr/share/ca-certificates/9414.pem
	I1101 09:19:06.889982  232578 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9414.pem
	I1101 09:19:06.928704  232578 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/9414.pem /etc/ssl/certs/51391683.0"
	I1101 09:19:06.938123  232578 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/94142.pem && ln -fs /usr/share/ca-certificates/94142.pem /etc/ssl/certs/94142.pem"
	I1101 09:19:06.947248  232578 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/94142.pem
	I1101 09:19:06.951369  232578 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  1 08:35 /usr/share/ca-certificates/94142.pem
	I1101 09:19:06.951416  232578 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/94142.pem
	I1101 09:19:06.988968  232578 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/94142.pem /etc/ssl/certs/3ec20f2e.0"
	I1101 09:19:06.998575  232578 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1101 09:19:07.008348  232578 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1101 09:19:07.012958  232578 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  1 08:29 /usr/share/ca-certificates/minikubeCA.pem
	I1101 09:19:07.013026  232578 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1101 09:19:07.049608  232578 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
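Each CA certificate is trusted system-wide by linking it under /etc/ssl/certs/<subject-hash>.0, where the hash comes from "openssl x509 -hash -noout", matching the test -L / ln -fs commands above. A small Go sketch of that step (needs root; the input path mirrors the log):

// Compute the OpenSSL subject hash for a PEM cert and link it into
// /etc/ssl/certs/<hash>.0 if no link exists yet.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func trustCert(pemPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return err
	}
	link := filepath.Join("/etc/ssl/certs", strings.TrimSpace(string(out))+".0")
	if _, err := os.Lstat(link); err == nil {
		return nil // already linked
	}
	return os.Symlink(pemPath, link)
}

func main() {
	if err := trustCert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Println(err)
	}
}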
	I1101 09:19:07.060738  232578 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1101 09:19:07.065271  232578 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1101 09:19:07.065338  232578 kubeadm.go:401] StartCluster: {Name:no-preload-397460 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-397460 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 09:19:07.065421  232578 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1101 09:19:07.065475  232578 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1101 09:19:07.096173  232578 cri.go:89] found id: ""
	I1101 09:19:07.096238  232578 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1101 09:19:07.104841  232578 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1101 09:19:07.113417  232578 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1101 09:19:07.113487  232578 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1101 09:19:07.122450  232578 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1101 09:19:07.122468  232578 kubeadm.go:158] found existing configuration files:
	
	I1101 09:19:07.122509  232578 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1101 09:19:07.131458  232578 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1101 09:19:07.131518  232578 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1101 09:19:07.139995  232578 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1101 09:19:07.149547  232578 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1101 09:19:07.149608  232578 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1101 09:19:07.158030  232578 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1101 09:19:07.167281  232578 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1101 09:19:07.167330  232578 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1101 09:19:07.176253  232578 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1101 09:19:07.185643  232578 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1101 09:19:07.185719  232578 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
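The grep/rm sequence above is the stale-config sweep: each kubeconfig under /etc/kubernetes is kept only if it already references https://control-plane.minikube.internal:8443, and removed otherwise so "kubeadm init" regenerates it. A local sketch of that check-then-remove logic (single file, placeholder path):

// Keep a kubeconfig only if it already points at the expected
// control-plane endpoint, otherwise remove it.
package main

import (
	"bytes"
	"fmt"
	"os"
)

func cleanStaleKubeconfig(path, endpoint string) error {
	data, err := os.ReadFile(path)
	if os.IsNotExist(err) {
		return nil // nothing to clean; kubeadm will create it
	}
	if err != nil {
		return err
	}
	if bytes.Contains(data, []byte(endpoint)) {
		return nil // already points at the right endpoint
	}
	return os.Remove(path)
}

func main() {
	err := cleanStaleKubeconfig("/etc/kubernetes/admin.conf",
		"https://control-plane.minikube.internal:8443")
	if err != nil {
		fmt.Println(err)
	}
}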
	I1101 09:19:07.194906  232578 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1101 09:19:07.253288  232578 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1043-gcp\n", err: exit status 1
	I1101 09:19:07.311118  232578 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1101 09:19:04.500700  230484 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 09:19:05.001489  230484 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 09:19:05.500983  230484 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 09:19:06.000604  230484 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 09:19:06.500745  230484 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 09:19:07.000586  230484 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 09:19:07.501085  230484 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 09:19:08.000538  230484 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 09:19:08.500761  230484 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 09:19:09.000599  230484 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 09:19:07.925734  216020 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": (11.057089688s)
	W1101 09:19:07.925783  216020 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	Get "https://localhost:8443/api/v1/nodes?limit=500": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:48532->[::1]:8443: read: connection reset by peer
	 output: 
	** stderr ** 
	Get "https://localhost:8443/api/v1/nodes?limit=500": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:48532->[::1]:8443: read: connection reset by peer
	
	** /stderr **
	I1101 09:19:07.925799  216020 logs.go:123] Gathering logs for kube-apiserver [eabdbeb5890947bf53852620457589bd21197fc78d28db737be101d8bb385f10] ...
	I1101 09:19:07.925812  216020 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 eabdbeb5890947bf53852620457589bd21197fc78d28db737be101d8bb385f10"
	W1101 09:19:07.953581  216020 logs.go:130] failed kube-apiserver [eabdbeb5890947bf53852620457589bd21197fc78d28db737be101d8bb385f10]: command: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 eabdbeb5890947bf53852620457589bd21197fc78d28db737be101d8bb385f10" /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 eabdbeb5890947bf53852620457589bd21197fc78d28db737be101d8bb385f10": Process exited with status 1
	stdout:
	
	stderr:
	E1101 09:19:07.951109    1208 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"eabdbeb5890947bf53852620457589bd21197fc78d28db737be101d8bb385f10\": container with ID starting with eabdbeb5890947bf53852620457589bd21197fc78d28db737be101d8bb385f10 not found: ID does not exist" containerID="eabdbeb5890947bf53852620457589bd21197fc78d28db737be101d8bb385f10"
	time="2025-11-01T09:19:07Z" level=fatal msg="rpc error: code = NotFound desc = could not find container \"eabdbeb5890947bf53852620457589bd21197fc78d28db737be101d8bb385f10\": container with ID starting with eabdbeb5890947bf53852620457589bd21197fc78d28db737be101d8bb385f10 not found: ID does not exist"
	 output: 
	** stderr ** 
	E1101 09:19:07.951109    1208 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"eabdbeb5890947bf53852620457589bd21197fc78d28db737be101d8bb385f10\": container with ID starting with eabdbeb5890947bf53852620457589bd21197fc78d28db737be101d8bb385f10 not found: ID does not exist" containerID="eabdbeb5890947bf53852620457589bd21197fc78d28db737be101d8bb385f10"
	time="2025-11-01T09:19:07Z" level=fatal msg="rpc error: code = NotFound desc = could not find container \"eabdbeb5890947bf53852620457589bd21197fc78d28db737be101d8bb385f10\": container with ID starting with eabdbeb5890947bf53852620457589bd21197fc78d28db737be101d8bb385f10 not found: ID does not exist"
	
	** /stderr **
	I1101 09:19:07.953610  216020 logs.go:123] Gathering logs for kube-scheduler [7bc51deb4b19b20f402b361941a1b67db11ece9919ebd34259e751ec1f07abb8] ...
	I1101 09:19:07.953623  216020 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7bc51deb4b19b20f402b361941a1b67db11ece9919ebd34259e751ec1f07abb8"
	I1101 09:19:07.997251  216020 logs.go:123] Gathering logs for kube-controller-manager [40fb6827ef020faf01b83649662bbb9dd9f1e77a7e4e9f618812cf6b94b44fe0] ...
	I1101 09:19:07.997283  216020 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40fb6827ef020faf01b83649662bbb9dd9f1e77a7e4e9f618812cf6b94b44fe0"
	I1101 09:19:08.031299  216020 logs.go:123] Gathering logs for dmesg ...
	I1101 09:19:08.031334  216020 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1101 09:19:08.048543  216020 logs.go:123] Gathering logs for kube-apiserver [d2fc2ccdf9333f0fc4490b5a07f6246fd25294653d0b29a2e8b27d471ce65ce3] ...
	I1101 09:19:08.048579  216020 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d2fc2ccdf9333f0fc4490b5a07f6246fd25294653d0b29a2e8b27d471ce65ce3"
	I1101 09:19:08.084712  216020 logs.go:123] Gathering logs for kube-controller-manager [695fb2f4a2bd5c728b61d2d503f417c54b1088ddb5902430c9fd42b8fcf8289f] ...
	I1101 09:19:08.084744  216020 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 695fb2f4a2bd5c728b61d2d503f417c54b1088ddb5902430c9fd42b8fcf8289f"
	W1101 09:19:08.112082  216020 logs.go:130] failed kube-controller-manager [695fb2f4a2bd5c728b61d2d503f417c54b1088ddb5902430c9fd42b8fcf8289f]: command: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 695fb2f4a2bd5c728b61d2d503f417c54b1088ddb5902430c9fd42b8fcf8289f" /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 695fb2f4a2bd5c728b61d2d503f417c54b1088ddb5902430c9fd42b8fcf8289f": Process exited with status 1
	stdout:
	
	stderr:
	E1101 09:19:08.109288    1258 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"695fb2f4a2bd5c728b61d2d503f417c54b1088ddb5902430c9fd42b8fcf8289f\": container with ID starting with 695fb2f4a2bd5c728b61d2d503f417c54b1088ddb5902430c9fd42b8fcf8289f not found: ID does not exist" containerID="695fb2f4a2bd5c728b61d2d503f417c54b1088ddb5902430c9fd42b8fcf8289f"
	time="2025-11-01T09:19:08Z" level=fatal msg="rpc error: code = NotFound desc = could not find container \"695fb2f4a2bd5c728b61d2d503f417c54b1088ddb5902430c9fd42b8fcf8289f\": container with ID starting with 695fb2f4a2bd5c728b61d2d503f417c54b1088ddb5902430c9fd42b8fcf8289f not found: ID does not exist"
	 output: 
	** stderr ** 
	E1101 09:19:08.109288    1258 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"695fb2f4a2bd5c728b61d2d503f417c54b1088ddb5902430c9fd42b8fcf8289f\": container with ID starting with 695fb2f4a2bd5c728b61d2d503f417c54b1088ddb5902430c9fd42b8fcf8289f not found: ID does not exist" containerID="695fb2f4a2bd5c728b61d2d503f417c54b1088ddb5902430c9fd42b8fcf8289f"
	time="2025-11-01T09:19:08Z" level=fatal msg="rpc error: code = NotFound desc = could not find container \"695fb2f4a2bd5c728b61d2d503f417c54b1088ddb5902430c9fd42b8fcf8289f\": container with ID starting with 695fb2f4a2bd5c728b61d2d503f417c54b1088ddb5902430c9fd42b8fcf8289f not found: ID does not exist"
	
	** /stderr **
	I1101 09:19:08.112104  216020 logs.go:123] Gathering logs for CRI-O ...
	I1101 09:19:08.112119  216020 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1101 09:19:08.153371  216020 logs.go:123] Gathering logs for container status ...
	I1101 09:19:08.153407  216020 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1101 09:19:08.185188  216020 logs.go:123] Gathering logs for kubelet ...
	I1101 09:19:08.185216  216020 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
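The "Gathering logs for ..." steps above all shell out to the node: crictl logs for individual containers, journalctl for the crio and kubelet units, dmesg, and a crictl-or-docker fallback for the container listing. A rough sketch of that collection loop in Go; the commands are copied from the log lines, but running them locally rather than over SSH is an assumption made to keep the example self-contained.

// Illustrative sketch: collect the same node-side diagnostics this log gathers.
package main

import (
	"fmt"
	"os/exec"
)

func gather(name, command string) {
	out, err := exec.Command("/bin/bash", "-c", command).CombinedOutput()
	fmt.Printf("==> %s (err=%v)\n%s\n", name, err, out)
}

func main() {
	gather("CRI-O", `sudo journalctl -u crio -n 400`)
	gather("kubelet", `sudo journalctl -u kubelet -n 400`)
	gather("dmesg", `sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400`)
	// Falls back to docker if crictl is not on PATH, exactly as in the log line.
	gather("container status", "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a")
}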
	I1101 09:19:09.501163  230484 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 09:19:10.001022  230484 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 09:19:10.501001  230484 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 09:19:11.001132  230484 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 09:19:11.500721  230484 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 09:19:12.001377  230484 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 09:19:12.501077  230484 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 09:19:13.001212  230484 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 09:19:13.501273  230484 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 09:19:13.635681  230484 kubeadm.go:1114] duration metric: took 12.214908528s to wait for elevateKubeSystemPrivileges
	I1101 09:19:13.635733  230484 kubeadm.go:403] duration metric: took 22.703219354s to StartCluster
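The half-second "kubectl get sa default" loop above is how the wait for elevateKubeSystemPrivileges appears in the log: kubeadm has finished, but the "default" ServiceAccount only exists once the controller-manager has caught up. Below is a hedged client-go equivalent of that poll; the kubeconfig path and 500ms cadence come from the log, everything else is assumed.

// Illustrative sketch, not minikube's implementation: poll until the "default"
// ServiceAccount exists, which is the condition behind the repeated
// "kubectl get sa default" calls above.
package main

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	start := time.Now()
	for {
		_, err := client.CoreV1().ServiceAccounts("default").Get(context.TODO(), "default", metav1.GetOptions{})
		if err == nil {
			fmt.Printf("default ServiceAccount ready after %s\n", time.Since(start))
			return
		}
		time.Sleep(500 * time.Millisecond) // same cadence as the retries in the log
	}
}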
	I1101 09:19:13.635755  230484 settings.go:142] acquiring lock: {Name:mkb1ba7d0d4bb15f3f0746ce486d72703f901580 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:19:13.635823  230484 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21835-5913/kubeconfig
	I1101 09:19:13.637611  230484 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21835-5913/kubeconfig: {Name:mk9d3795c1875d6d9ba81c81d9d8436a6b7942d2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:19:13.637931  230484 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1101 09:19:13.637945  230484 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1101 09:19:13.638030  230484 addons.go:70] Setting storage-provisioner=true in profile "old-k8s-version-152344"
	I1101 09:19:13.638050  230484 addons.go:239] Setting addon storage-provisioner=true in "old-k8s-version-152344"
	I1101 09:19:13.638074  230484 host.go:66] Checking if "old-k8s-version-152344" exists ...
	I1101 09:19:13.637921  230484 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1101 09:19:13.638162  230484 addons.go:70] Setting default-storageclass=true in profile "old-k8s-version-152344"
	I1101 09:19:13.638185  230484 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-152344"
	I1101 09:19:13.638148  230484 config.go:182] Loaded profile config "old-k8s-version-152344": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1101 09:19:13.638514  230484 cli_runner.go:164] Run: docker container inspect old-k8s-version-152344 --format={{.State.Status}}
	I1101 09:19:13.638624  230484 cli_runner.go:164] Run: docker container inspect old-k8s-version-152344 --format={{.State.Status}}
	I1101 09:19:13.639924  230484 out.go:179] * Verifying Kubernetes components...
	I1101 09:19:13.643998  230484 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 09:19:13.674050  230484 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1101 09:19:13.675615  230484 addons.go:239] Setting addon default-storageclass=true in "old-k8s-version-152344"
	I1101 09:19:13.675716  230484 host.go:66] Checking if "old-k8s-version-152344" exists ...
	I1101 09:19:13.676285  230484 cli_runner.go:164] Run: docker container inspect old-k8s-version-152344 --format={{.State.Status}}
	I1101 09:19:13.676544  230484 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1101 09:19:13.676558  230484 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1101 09:19:13.676602  230484 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-152344
	I1101 09:19:13.703377  230484 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33053 SSHKeyPath:/home/jenkins/minikube-integration/21835-5913/.minikube/machines/old-k8s-version-152344/id_rsa Username:docker}
	I1101 09:19:13.710096  230484 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1101 09:19:13.710295  230484 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1101 09:19:13.710837  230484 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-152344
	I1101 09:19:13.751928  230484 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33053 SSHKeyPath:/home/jenkins/minikube-integration/21835-5913/.minikube/machines/old-k8s-version-152344/id_rsa Username:docker}
	I1101 09:19:13.803600  230484 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.103.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1101 09:19:13.845017  230484 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1101 09:19:13.886670  230484 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1101 09:19:13.925480  230484 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1101 09:19:14.237614  230484 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-152344" to be "Ready" ...
	I1101 09:19:14.238950  230484 start.go:977] {"host.minikube.internal": 192.168.103.1} host record injected into CoreDNS's ConfigMap
	I1101 09:19:14.482091  230484 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
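Addon enablement above is two steps per addon: copy the manifest to /etc/kubernetes/addons/ over SSH, then apply it with kubectl while pointing KUBECONFIG at the node-local kubeconfig. A sketch of the apply step, with KUBECONFIG passed as a sudo command-line assignment exactly as the log shows; the surrounding structure is assumed.

// Illustrative sketch: apply addon manifests the way the log does.
package main

import (
	"fmt"
	"os/exec"
)

func applyAddon(manifest string) error {
	cmd := exec.Command("sudo",
		"KUBECONFIG=/var/lib/minikube/kubeconfig",
		"/var/lib/minikube/binaries/v1.28.0/kubectl",
		"apply", "-f", manifest)
	out, err := cmd.CombinedOutput()
	fmt.Print(string(out))
	return err
}

func main() {
	for _, m := range []string{
		"/etc/kubernetes/addons/storage-provisioner.yaml",
		"/etc/kubernetes/addons/storageclass.yaml",
	} {
		if err := applyAddon(m); err != nil {
			fmt.Println("apply failed:", err)
		}
	}
}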
	I1101 09:19:10.752341  216020 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1101 09:19:10.752831  216020 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1101 09:19:10.752908  216020 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1101 09:19:10.752961  216020 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1101 09:19:10.782950  216020 cri.go:89] found id: "d2fc2ccdf9333f0fc4490b5a07f6246fd25294653d0b29a2e8b27d471ce65ce3"
	I1101 09:19:10.782977  216020 cri.go:89] found id: ""
	I1101 09:19:10.782987  216020 logs.go:282] 1 containers: [d2fc2ccdf9333f0fc4490b5a07f6246fd25294653d0b29a2e8b27d471ce65ce3]
	I1101 09:19:10.783043  216020 ssh_runner.go:195] Run: which crictl
	I1101 09:19:10.787198  216020 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1101 09:19:10.787272  216020 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1101 09:19:10.815804  216020 cri.go:89] found id: ""
	I1101 09:19:10.815835  216020 logs.go:282] 0 containers: []
	W1101 09:19:10.815848  216020 logs.go:284] No container was found matching "etcd"
	I1101 09:19:10.815856  216020 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1101 09:19:10.815937  216020 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1101 09:19:10.844165  216020 cri.go:89] found id: ""
	I1101 09:19:10.844187  216020 logs.go:282] 0 containers: []
	W1101 09:19:10.844195  216020 logs.go:284] No container was found matching "coredns"
	I1101 09:19:10.844200  216020 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1101 09:19:10.844243  216020 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1101 09:19:10.873707  216020 cri.go:89] found id: "7bc51deb4b19b20f402b361941a1b67db11ece9919ebd34259e751ec1f07abb8"
	I1101 09:19:10.873733  216020 cri.go:89] found id: ""
	I1101 09:19:10.873743  216020 logs.go:282] 1 containers: [7bc51deb4b19b20f402b361941a1b67db11ece9919ebd34259e751ec1f07abb8]
	I1101 09:19:10.873796  216020 ssh_runner.go:195] Run: which crictl
	I1101 09:19:10.877963  216020 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1101 09:19:10.878032  216020 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1101 09:19:10.906638  216020 cri.go:89] found id: ""
	I1101 09:19:10.906678  216020 logs.go:282] 0 containers: []
	W1101 09:19:10.906688  216020 logs.go:284] No container was found matching "kube-proxy"
	I1101 09:19:10.906696  216020 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1101 09:19:10.906758  216020 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1101 09:19:10.936436  216020 cri.go:89] found id: "40fb6827ef020faf01b83649662bbb9dd9f1e77a7e4e9f618812cf6b94b44fe0"
	I1101 09:19:10.936461  216020 cri.go:89] found id: ""
	I1101 09:19:10.936471  216020 logs.go:282] 1 containers: [40fb6827ef020faf01b83649662bbb9dd9f1e77a7e4e9f618812cf6b94b44fe0]
	I1101 09:19:10.936521  216020 ssh_runner.go:195] Run: which crictl
	I1101 09:19:10.941225  216020 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1101 09:19:10.941296  216020 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1101 09:19:10.971780  216020 cri.go:89] found id: ""
	I1101 09:19:10.971809  216020 logs.go:282] 0 containers: []
	W1101 09:19:10.971819  216020 logs.go:284] No container was found matching "kindnet"
	I1101 09:19:10.971827  216020 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1101 09:19:10.971904  216020 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1101 09:19:11.007144  216020 cri.go:89] found id: ""
	I1101 09:19:11.007174  216020 logs.go:282] 0 containers: []
	W1101 09:19:11.007186  216020 logs.go:284] No container was found matching "storage-provisioner"
	I1101 09:19:11.007198  216020 logs.go:123] Gathering logs for kube-controller-manager [40fb6827ef020faf01b83649662bbb9dd9f1e77a7e4e9f618812cf6b94b44fe0] ...
	I1101 09:19:11.007228  216020 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40fb6827ef020faf01b83649662bbb9dd9f1e77a7e4e9f618812cf6b94b44fe0"
	I1101 09:19:11.038155  216020 logs.go:123] Gathering logs for CRI-O ...
	I1101 09:19:11.038183  216020 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1101 09:19:11.086906  216020 logs.go:123] Gathering logs for container status ...
	I1101 09:19:11.086941  216020 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1101 09:19:11.119426  216020 logs.go:123] Gathering logs for kubelet ...
	I1101 09:19:11.119459  216020 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1101 09:19:11.188434  216020 logs.go:123] Gathering logs for dmesg ...
	I1101 09:19:11.188468  216020 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1101 09:19:11.210664  216020 logs.go:123] Gathering logs for describe nodes ...
	I1101 09:19:11.210692  216020 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1101 09:19:11.271011  216020 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1101 09:19:11.271033  216020 logs.go:123] Gathering logs for kube-apiserver [d2fc2ccdf9333f0fc4490b5a07f6246fd25294653d0b29a2e8b27d471ce65ce3] ...
	I1101 09:19:11.271048  216020 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d2fc2ccdf9333f0fc4490b5a07f6246fd25294653d0b29a2e8b27d471ce65ce3"
	I1101 09:19:11.307683  216020 logs.go:123] Gathering logs for kube-scheduler [7bc51deb4b19b20f402b361941a1b67db11ece9919ebd34259e751ec1f07abb8] ...
	I1101 09:19:11.307725  216020 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7bc51deb4b19b20f402b361941a1b67db11ece9919ebd34259e751ec1f07abb8"
	I1101 09:19:13.854830  216020 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1101 09:19:13.855234  216020 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1101 09:19:13.855301  216020 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1101 09:19:13.855362  216020 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1101 09:19:13.897195  216020 cri.go:89] found id: "d2fc2ccdf9333f0fc4490b5a07f6246fd25294653d0b29a2e8b27d471ce65ce3"
	I1101 09:19:13.897222  216020 cri.go:89] found id: ""
	I1101 09:19:13.897231  216020 logs.go:282] 1 containers: [d2fc2ccdf9333f0fc4490b5a07f6246fd25294653d0b29a2e8b27d471ce65ce3]
	I1101 09:19:13.897284  216020 ssh_runner.go:195] Run: which crictl
	I1101 09:19:13.905063  216020 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1101 09:19:13.905201  216020 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1101 09:19:13.952910  216020 cri.go:89] found id: ""
	I1101 09:19:13.952939  216020 logs.go:282] 0 containers: []
	W1101 09:19:13.952950  216020 logs.go:284] No container was found matching "etcd"
	I1101 09:19:13.952957  216020 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1101 09:19:13.953013  216020 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1101 09:19:14.012768  216020 cri.go:89] found id: ""
	I1101 09:19:14.012796  216020 logs.go:282] 0 containers: []
	W1101 09:19:14.012807  216020 logs.go:284] No container was found matching "coredns"
	I1101 09:19:14.012813  216020 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1101 09:19:14.012914  216020 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1101 09:19:14.077235  216020 cri.go:89] found id: "7bc51deb4b19b20f402b361941a1b67db11ece9919ebd34259e751ec1f07abb8"
	I1101 09:19:14.077262  216020 cri.go:89] found id: ""
	I1101 09:19:14.077272  216020 logs.go:282] 1 containers: [7bc51deb4b19b20f402b361941a1b67db11ece9919ebd34259e751ec1f07abb8]
	I1101 09:19:14.077350  216020 ssh_runner.go:195] Run: which crictl
	I1101 09:19:14.082736  216020 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1101 09:19:14.082912  216020 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1101 09:19:14.121245  216020 cri.go:89] found id: ""
	I1101 09:19:14.121269  216020 logs.go:282] 0 containers: []
	W1101 09:19:14.121277  216020 logs.go:284] No container was found matching "kube-proxy"
	I1101 09:19:14.121284  216020 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1101 09:19:14.121335  216020 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1101 09:19:14.164366  216020 cri.go:89] found id: "40fb6827ef020faf01b83649662bbb9dd9f1e77a7e4e9f618812cf6b94b44fe0"
	I1101 09:19:14.164389  216020 cri.go:89] found id: ""
	I1101 09:19:14.164398  216020 logs.go:282] 1 containers: [40fb6827ef020faf01b83649662bbb9dd9f1e77a7e4e9f618812cf6b94b44fe0]
	I1101 09:19:14.164450  216020 ssh_runner.go:195] Run: which crictl
	I1101 09:19:14.170393  216020 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1101 09:19:14.170470  216020 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1101 09:19:14.222746  216020 cri.go:89] found id: ""
	I1101 09:19:14.222791  216020 logs.go:282] 0 containers: []
	W1101 09:19:14.222803  216020 logs.go:284] No container was found matching "kindnet"
	I1101 09:19:14.222827  216020 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1101 09:19:14.222905  216020 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1101 09:19:14.264445  216020 cri.go:89] found id: ""
	I1101 09:19:14.264477  216020 logs.go:282] 0 containers: []
	W1101 09:19:14.264489  216020 logs.go:284] No container was found matching "storage-provisioner"
	I1101 09:19:14.264499  216020 logs.go:123] Gathering logs for kube-apiserver [d2fc2ccdf9333f0fc4490b5a07f6246fd25294653d0b29a2e8b27d471ce65ce3] ...
	I1101 09:19:14.264514  216020 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d2fc2ccdf9333f0fc4490b5a07f6246fd25294653d0b29a2e8b27d471ce65ce3"
	I1101 09:19:14.319189  216020 logs.go:123] Gathering logs for kube-scheduler [7bc51deb4b19b20f402b361941a1b67db11ece9919ebd34259e751ec1f07abb8] ...
	I1101 09:19:14.319229  216020 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7bc51deb4b19b20f402b361941a1b67db11ece9919ebd34259e751ec1f07abb8"
	I1101 09:19:14.382648  216020 logs.go:123] Gathering logs for kube-controller-manager [40fb6827ef020faf01b83649662bbb9dd9f1e77a7e4e9f618812cf6b94b44fe0] ...
	I1101 09:19:14.382690  216020 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40fb6827ef020faf01b83649662bbb9dd9f1e77a7e4e9f618812cf6b94b44fe0"
	I1101 09:19:14.414562  216020 logs.go:123] Gathering logs for CRI-O ...
	I1101 09:19:14.414592  216020 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1101 09:19:14.460321  216020 logs.go:123] Gathering logs for container status ...
	I1101 09:19:14.460358  216020 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1101 09:19:14.496996  216020 logs.go:123] Gathering logs for kubelet ...
	I1101 09:19:14.497024  216020 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1101 09:19:14.564435  216020 logs.go:123] Gathering logs for dmesg ...
	I1101 09:19:14.564469  216020 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1101 09:19:14.583564  216020 logs.go:123] Gathering logs for describe nodes ...
	I1101 09:19:14.583596  216020 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1101 09:19:14.648164  216020 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1101 09:19:17.692022  232578 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1101 09:19:17.692142  232578 kubeadm.go:319] [preflight] Running pre-flight checks
	I1101 09:19:17.692286  232578 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1101 09:19:17.692399  232578 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1043-gcp
	I1101 09:19:17.692450  232578 kubeadm.go:319] OS: Linux
	I1101 09:19:17.692536  232578 kubeadm.go:319] CGROUPS_CPU: enabled
	I1101 09:19:17.692611  232578 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1101 09:19:17.692692  232578 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1101 09:19:17.692790  232578 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1101 09:19:17.692928  232578 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1101 09:19:17.692999  232578 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1101 09:19:17.693079  232578 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1101 09:19:17.693155  232578 kubeadm.go:319] CGROUPS_IO: enabled
	I1101 09:19:17.693270  232578 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1101 09:19:17.693411  232578 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1101 09:19:17.693562  232578 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1101 09:19:17.693626  232578 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1101 09:19:17.694927  232578 out.go:252]   - Generating certificates and keys ...
	I1101 09:19:17.694993  232578 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1101 09:19:17.695067  232578 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1101 09:19:17.695139  232578 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1101 09:19:17.695190  232578 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1101 09:19:17.695247  232578 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1101 09:19:17.695290  232578 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1101 09:19:17.695357  232578 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1101 09:19:17.695553  232578 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost no-preload-397460] and IPs [192.168.94.2 127.0.0.1 ::1]
	I1101 09:19:17.695664  232578 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1101 09:19:17.695860  232578 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost no-preload-397460] and IPs [192.168.94.2 127.0.0.1 ::1]
	I1101 09:19:17.695970  232578 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1101 09:19:17.696072  232578 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1101 09:19:17.696134  232578 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1101 09:19:17.696216  232578 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1101 09:19:17.696292  232578 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1101 09:19:17.696386  232578 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1101 09:19:17.696442  232578 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1101 09:19:17.696528  232578 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1101 09:19:17.696603  232578 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1101 09:19:17.696744  232578 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1101 09:19:17.696851  232578 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1101 09:19:17.699140  232578 out.go:252]   - Booting up control plane ...
	I1101 09:19:17.699237  232578 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1101 09:19:17.699332  232578 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1101 09:19:17.699405  232578 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1101 09:19:17.699541  232578 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1101 09:19:17.699674  232578 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1101 09:19:17.699834  232578 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1101 09:19:17.700002  232578 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1101 09:19:17.700064  232578 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1101 09:19:17.700236  232578 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1101 09:19:17.700369  232578 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1101 09:19:17.700472  232578 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.000983559s
	I1101 09:19:17.700621  232578 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1101 09:19:17.700741  232578 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.94.2:8443/livez
	I1101 09:19:17.700914  232578 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1101 09:19:17.701041  232578 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1101 09:19:17.701144  232578 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 1.950373794s
	I1101 09:19:17.701252  232578 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 2.020997937s
	I1101 09:19:17.701346  232578 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 4.001563116s
	I1101 09:19:17.701523  232578 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1101 09:19:17.701717  232578 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1101 09:19:17.701798  232578 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1101 09:19:17.702133  232578 kubeadm.go:319] [mark-control-plane] Marking the node no-preload-397460 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1101 09:19:17.702225  232578 kubeadm.go:319] [bootstrap-token] Using token: 4rsro2.i9a4oyip340w3tet
	I1101 09:19:17.703493  232578 out.go:252]   - Configuring RBAC rules ...
	I1101 09:19:17.703651  232578 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1101 09:19:17.703775  232578 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1101 09:19:17.703951  232578 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1101 09:19:17.704135  232578 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1101 09:19:17.704319  232578 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1101 09:19:17.704454  232578 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1101 09:19:17.704636  232578 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1101 09:19:17.704712  232578 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1101 09:19:17.704762  232578 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1101 09:19:17.704771  232578 kubeadm.go:319] 
	I1101 09:19:17.704882  232578 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1101 09:19:17.704898  232578 kubeadm.go:319] 
	I1101 09:19:17.704985  232578 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1101 09:19:17.705000  232578 kubeadm.go:319] 
	I1101 09:19:17.705028  232578 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1101 09:19:17.705107  232578 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1101 09:19:17.705176  232578 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1101 09:19:17.705189  232578 kubeadm.go:319] 
	I1101 09:19:17.705261  232578 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1101 09:19:17.705274  232578 kubeadm.go:319] 
	I1101 09:19:17.705328  232578 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1101 09:19:17.705337  232578 kubeadm.go:319] 
	I1101 09:19:17.705397  232578 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1101 09:19:17.705483  232578 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1101 09:19:17.705552  232578 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1101 09:19:17.705560  232578 kubeadm.go:319] 
	I1101 09:19:17.705657  232578 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1101 09:19:17.705754  232578 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1101 09:19:17.705762  232578 kubeadm.go:319] 
	I1101 09:19:17.705881  232578 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token 4rsro2.i9a4oyip340w3tet \
	I1101 09:19:17.706013  232578 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:77adef89518789e9abc276228bed68518bce4d031c33004dbbcc2105ab5e0752 \
	I1101 09:19:17.706041  232578 kubeadm.go:319] 	--control-plane 
	I1101 09:19:17.706046  232578 kubeadm.go:319] 
	I1101 09:19:17.706147  232578 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1101 09:19:17.706173  232578 kubeadm.go:319] 
	I1101 09:19:17.706297  232578 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token 4rsro2.i9a4oyip340w3tet \
	I1101 09:19:17.706464  232578 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:77adef89518789e9abc276228bed68518bce4d031c33004dbbcc2105ab5e0752 
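The join commands printed by kubeadm above pin the cluster CA with --discovery-token-ca-cert-hash, which is the SHA-256 of the CA certificate's SubjectPublicKeyInfo. A small sketch that recomputes that value from the CA file, assuming ca.crt lives under the certificateDir reported earlier in this run (/var/lib/minikube/certs):

// Illustrative sketch: recompute kubeadm's --discovery-token-ca-cert-hash.
package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	// Path assumed from the "[certs] Using certificateDir" line above.
	pemBytes, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(pemBytes)
	if block == nil {
		panic("no PEM block found in ca.crt")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	// kubeadm hashes the DER-encoded SubjectPublicKeyInfo of the CA cert.
	sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
	fmt.Printf("sha256:%x\n", sum)
}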
	I1101 09:19:17.706478  232578 cni.go:84] Creating CNI manager for ""
	I1101 09:19:17.706487  232578 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1101 09:19:17.707870  232578 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1101 09:19:17.708988  232578 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1101 09:19:17.714169  232578 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1101 09:19:17.714193  232578 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1101 09:19:17.729561  232578 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1101 09:19:14.483150  230484 addons.go:515] duration metric: took 845.199085ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1101 09:19:14.742922  230484 kapi.go:214] "coredns" deployment in "kube-system" namespace and "old-k8s-version-152344" context rescaled to 1 replicas
	W1101 09:19:16.241153  230484 node_ready.go:57] node "old-k8s-version-152344" has "Ready":"False" status (will retry)
	W1101 09:19:18.740792  230484 node_ready.go:57] node "old-k8s-version-152344" has "Ready":"False" status (will retry)
	I1101 09:19:17.148960  216020 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1101 09:19:17.149392  216020 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1101 09:19:17.149448  216020 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1101 09:19:17.149520  216020 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1101 09:19:17.178836  216020 cri.go:89] found id: "d2fc2ccdf9333f0fc4490b5a07f6246fd25294653d0b29a2e8b27d471ce65ce3"
	I1101 09:19:17.178857  216020 cri.go:89] found id: ""
	I1101 09:19:17.178899  216020 logs.go:282] 1 containers: [d2fc2ccdf9333f0fc4490b5a07f6246fd25294653d0b29a2e8b27d471ce65ce3]
	I1101 09:19:17.178944  216020 ssh_runner.go:195] Run: which crictl
	I1101 09:19:17.182925  216020 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1101 09:19:17.182994  216020 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1101 09:19:17.211662  216020 cri.go:89] found id: ""
	I1101 09:19:17.211688  216020 logs.go:282] 0 containers: []
	W1101 09:19:17.211700  216020 logs.go:284] No container was found matching "etcd"
	I1101 09:19:17.211707  216020 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1101 09:19:17.211773  216020 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1101 09:19:17.239254  216020 cri.go:89] found id: ""
	I1101 09:19:17.239277  216020 logs.go:282] 0 containers: []
	W1101 09:19:17.239284  216020 logs.go:284] No container was found matching "coredns"
	I1101 09:19:17.239290  216020 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1101 09:19:17.239340  216020 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1101 09:19:17.268423  216020 cri.go:89] found id: "7bc51deb4b19b20f402b361941a1b67db11ece9919ebd34259e751ec1f07abb8"
	I1101 09:19:17.268447  216020 cri.go:89] found id: ""
	I1101 09:19:17.268457  216020 logs.go:282] 1 containers: [7bc51deb4b19b20f402b361941a1b67db11ece9919ebd34259e751ec1f07abb8]
	I1101 09:19:17.268518  216020 ssh_runner.go:195] Run: which crictl
	I1101 09:19:17.272520  216020 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1101 09:19:17.272585  216020 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1101 09:19:17.304388  216020 cri.go:89] found id: ""
	I1101 09:19:17.304413  216020 logs.go:282] 0 containers: []
	W1101 09:19:17.304421  216020 logs.go:284] No container was found matching "kube-proxy"
	I1101 09:19:17.304427  216020 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1101 09:19:17.304473  216020 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1101 09:19:17.331719  216020 cri.go:89] found id: "40fb6827ef020faf01b83649662bbb9dd9f1e77a7e4e9f618812cf6b94b44fe0"
	I1101 09:19:17.331747  216020 cri.go:89] found id: ""
	I1101 09:19:17.331757  216020 logs.go:282] 1 containers: [40fb6827ef020faf01b83649662bbb9dd9f1e77a7e4e9f618812cf6b94b44fe0]
	I1101 09:19:17.331814  216020 ssh_runner.go:195] Run: which crictl
	I1101 09:19:17.336608  216020 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1101 09:19:17.336679  216020 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1101 09:19:17.365489  216020 cri.go:89] found id: ""
	I1101 09:19:17.365516  216020 logs.go:282] 0 containers: []
	W1101 09:19:17.365527  216020 logs.go:284] No container was found matching "kindnet"
	I1101 09:19:17.365535  216020 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1101 09:19:17.365593  216020 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1101 09:19:17.393955  216020 cri.go:89] found id: ""
	I1101 09:19:17.393979  216020 logs.go:282] 0 containers: []
	W1101 09:19:17.394056  216020 logs.go:284] No container was found matching "storage-provisioner"
	I1101 09:19:17.394072  216020 logs.go:123] Gathering logs for kube-apiserver [d2fc2ccdf9333f0fc4490b5a07f6246fd25294653d0b29a2e8b27d471ce65ce3] ...
	I1101 09:19:17.394088  216020 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d2fc2ccdf9333f0fc4490b5a07f6246fd25294653d0b29a2e8b27d471ce65ce3"
	I1101 09:19:17.427220  216020 logs.go:123] Gathering logs for kube-scheduler [7bc51deb4b19b20f402b361941a1b67db11ece9919ebd34259e751ec1f07abb8] ...
	I1101 09:19:17.427259  216020 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7bc51deb4b19b20f402b361941a1b67db11ece9919ebd34259e751ec1f07abb8"
	I1101 09:19:17.470463  216020 logs.go:123] Gathering logs for kube-controller-manager [40fb6827ef020faf01b83649662bbb9dd9f1e77a7e4e9f618812cf6b94b44fe0] ...
	I1101 09:19:17.470493  216020 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40fb6827ef020faf01b83649662bbb9dd9f1e77a7e4e9f618812cf6b94b44fe0"
	I1101 09:19:17.498960  216020 logs.go:123] Gathering logs for CRI-O ...
	I1101 09:19:17.498987  216020 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1101 09:19:17.540452  216020 logs.go:123] Gathering logs for container status ...
	I1101 09:19:17.540485  216020 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1101 09:19:17.575541  216020 logs.go:123] Gathering logs for kubelet ...
	I1101 09:19:17.575579  216020 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1101 09:19:17.649824  216020 logs.go:123] Gathering logs for dmesg ...
	I1101 09:19:17.649882  216020 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1101 09:19:17.667603  216020 logs.go:123] Gathering logs for describe nodes ...
	I1101 09:19:17.667638  216020 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1101 09:19:17.731590  216020 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1101 09:19:17.947322  232578 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1101 09:19:17.947385  232578 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 09:19:17.947509  232578 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-397460 minikube.k8s.io/updated_at=2025_11_01T09_19_17_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=21e20c7776311c6e29254646bf2620ea610dd192 minikube.k8s.io/name=no-preload-397460 minikube.k8s.io/primary=true
	I1101 09:19:17.959625  232578 ops.go:34] apiserver oom_adj: -16
	I1101 09:19:18.028424  232578 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 09:19:18.529356  232578 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 09:19:19.028557  232578 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 09:19:19.528554  232578 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 09:19:20.029289  232578 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 09:19:20.529167  232578 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 09:19:21.028744  232578 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 09:19:21.528984  232578 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 09:19:22.029334  232578 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 09:19:22.528922  232578 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 09:19:22.600973  232578 kubeadm.go:1114] duration metric: took 4.653650749s to wait for elevateKubeSystemPrivileges
	I1101 09:19:22.601078  232578 kubeadm.go:403] duration metric: took 15.53573906s to StartCluster
	I1101 09:19:22.601120  232578 settings.go:142] acquiring lock: {Name:mkb1ba7d0d4bb15f3f0746ce486d72703f901580 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:19:22.601186  232578 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21835-5913/kubeconfig
	I1101 09:19:22.602760  232578 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21835-5913/kubeconfig: {Name:mk9d3795c1875d6d9ba81c81d9d8436a6b7942d2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:19:22.603047  232578 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1101 09:19:22.603082  232578 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1101 09:19:22.603177  232578 addons.go:70] Setting storage-provisioner=true in profile "no-preload-397460"
	I1101 09:19:22.603200  232578 addons.go:239] Setting addon storage-provisioner=true in "no-preload-397460"
	I1101 09:19:22.603205  232578 addons.go:70] Setting default-storageclass=true in profile "no-preload-397460"
	I1101 09:19:22.603226  232578 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "no-preload-397460"
	I1101 09:19:22.603232  232578 host.go:66] Checking if "no-preload-397460" exists ...
	I1101 09:19:22.603240  232578 config.go:182] Loaded profile config "no-preload-397460": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:19:22.603057  232578 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1101 09:19:22.603739  232578 cli_runner.go:164] Run: docker container inspect no-preload-397460 --format={{.State.Status}}
	I1101 09:19:22.603789  232578 cli_runner.go:164] Run: docker container inspect no-preload-397460 --format={{.State.Status}}
	I1101 09:19:22.604673  232578 out.go:179] * Verifying Kubernetes components...
	I1101 09:19:22.605992  232578 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 09:19:22.627070  232578 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1101 09:19:22.627154  232578 addons.go:239] Setting addon default-storageclass=true in "no-preload-397460"
	I1101 09:19:22.627200  232578 host.go:66] Checking if "no-preload-397460" exists ...
	I1101 09:19:22.627649  232578 cli_runner.go:164] Run: docker container inspect no-preload-397460 --format={{.State.Status}}
	I1101 09:19:22.628478  232578 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1101 09:19:22.628500  232578 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1101 09:19:22.628551  232578 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-397460
	I1101 09:19:22.657213  232578 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1101 09:19:22.657237  232578 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1101 09:19:22.657315  232578 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-397460
	I1101 09:19:22.657338  232578 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33058 SSHKeyPath:/home/jenkins/minikube-integration/21835-5913/.minikube/machines/no-preload-397460/id_rsa Username:docker}
	I1101 09:19:22.683527  232578 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33058 SSHKeyPath:/home/jenkins/minikube-integration/21835-5913/.minikube/machines/no-preload-397460/id_rsa Username:docker}
	I1101 09:19:22.694934  232578 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.94.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1101 09:19:22.754297  232578 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1101 09:19:22.781083  232578 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1101 09:19:22.798477  232578 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1101 09:19:22.878331  232578 start.go:977] {"host.minikube.internal": 192.168.94.1} host record injected into CoreDNS's ConfigMap
	I1101 09:19:22.879894  232578 node_ready.go:35] waiting up to 6m0s for node "no-preload-397460" to be "Ready" ...
	I1101 09:19:23.093638  232578 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	W1101 09:19:20.741343  230484 node_ready.go:57] node "old-k8s-version-152344" has "Ready":"False" status (will retry)
	W1101 09:19:22.742359  230484 node_ready.go:57] node "old-k8s-version-152344" has "Ready":"False" status (will retry)
	I1101 09:19:20.232191  216020 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1101 09:19:20.232657  216020 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1101 09:19:20.232722  216020 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1101 09:19:20.232775  216020 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1101 09:19:20.261464  216020 cri.go:89] found id: "d2fc2ccdf9333f0fc4490b5a07f6246fd25294653d0b29a2e8b27d471ce65ce3"
	I1101 09:19:20.261490  216020 cri.go:89] found id: ""
	I1101 09:19:20.261500  216020 logs.go:282] 1 containers: [d2fc2ccdf9333f0fc4490b5a07f6246fd25294653d0b29a2e8b27d471ce65ce3]
	I1101 09:19:20.261562  216020 ssh_runner.go:195] Run: which crictl
	I1101 09:19:20.265622  216020 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1101 09:19:20.265704  216020 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1101 09:19:20.292716  216020 cri.go:89] found id: ""
	I1101 09:19:20.292750  216020 logs.go:282] 0 containers: []
	W1101 09:19:20.292761  216020 logs.go:284] No container was found matching "etcd"
	I1101 09:19:20.292776  216020 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1101 09:19:20.292837  216020 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1101 09:19:20.322341  216020 cri.go:89] found id: ""
	I1101 09:19:20.322372  216020 logs.go:282] 0 containers: []
	W1101 09:19:20.322383  216020 logs.go:284] No container was found matching "coredns"
	I1101 09:19:20.322390  216020 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1101 09:19:20.322453  216020 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1101 09:19:20.350731  216020 cri.go:89] found id: "7bc51deb4b19b20f402b361941a1b67db11ece9919ebd34259e751ec1f07abb8"
	I1101 09:19:20.350752  216020 cri.go:89] found id: ""
	I1101 09:19:20.350767  216020 logs.go:282] 1 containers: [7bc51deb4b19b20f402b361941a1b67db11ece9919ebd34259e751ec1f07abb8]
	I1101 09:19:20.350820  216020 ssh_runner.go:195] Run: which crictl
	I1101 09:19:20.355000  216020 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1101 09:19:20.355072  216020 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1101 09:19:20.382928  216020 cri.go:89] found id: ""
	I1101 09:19:20.382957  216020 logs.go:282] 0 containers: []
	W1101 09:19:20.382967  216020 logs.go:284] No container was found matching "kube-proxy"
	I1101 09:19:20.382975  216020 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1101 09:19:20.383038  216020 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1101 09:19:20.412853  216020 cri.go:89] found id: "40fb6827ef020faf01b83649662bbb9dd9f1e77a7e4e9f618812cf6b94b44fe0"
	I1101 09:19:20.412914  216020 cri.go:89] found id: ""
	I1101 09:19:20.412924  216020 logs.go:282] 1 containers: [40fb6827ef020faf01b83649662bbb9dd9f1e77a7e4e9f618812cf6b94b44fe0]
	I1101 09:19:20.412991  216020 ssh_runner.go:195] Run: which crictl
	I1101 09:19:20.417845  216020 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1101 09:19:20.417947  216020 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1101 09:19:20.446127  216020 cri.go:89] found id: ""
	I1101 09:19:20.446153  216020 logs.go:282] 0 containers: []
	W1101 09:19:20.446162  216020 logs.go:284] No container was found matching "kindnet"
	I1101 09:19:20.446168  216020 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1101 09:19:20.446220  216020 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1101 09:19:20.474901  216020 cri.go:89] found id: ""
	I1101 09:19:20.474929  216020 logs.go:282] 0 containers: []
	W1101 09:19:20.474940  216020 logs.go:284] No container was found matching "storage-provisioner"
	I1101 09:19:20.474952  216020 logs.go:123] Gathering logs for CRI-O ...
	I1101 09:19:20.474966  216020 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1101 09:19:20.518810  216020 logs.go:123] Gathering logs for container status ...
	I1101 09:19:20.518844  216020 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1101 09:19:20.553002  216020 logs.go:123] Gathering logs for kubelet ...
	I1101 09:19:20.553032  216020 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1101 09:19:20.632007  216020 logs.go:123] Gathering logs for dmesg ...
	I1101 09:19:20.632038  216020 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1101 09:19:20.648516  216020 logs.go:123] Gathering logs for describe nodes ...
	I1101 09:19:20.648545  216020 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1101 09:19:20.710231  216020 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1101 09:19:20.710259  216020 logs.go:123] Gathering logs for kube-apiserver [d2fc2ccdf9333f0fc4490b5a07f6246fd25294653d0b29a2e8b27d471ce65ce3] ...
	I1101 09:19:20.710273  216020 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d2fc2ccdf9333f0fc4490b5a07f6246fd25294653d0b29a2e8b27d471ce65ce3"
	I1101 09:19:20.744191  216020 logs.go:123] Gathering logs for kube-scheduler [7bc51deb4b19b20f402b361941a1b67db11ece9919ebd34259e751ec1f07abb8] ...
	I1101 09:19:20.744228  216020 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7bc51deb4b19b20f402b361941a1b67db11ece9919ebd34259e751ec1f07abb8"
	I1101 09:19:20.789850  216020 logs.go:123] Gathering logs for kube-controller-manager [40fb6827ef020faf01b83649662bbb9dd9f1e77a7e4e9f618812cf6b94b44fe0] ...
	I1101 09:19:20.789893  216020 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40fb6827ef020faf01b83649662bbb9dd9f1e77a7e4e9f618812cf6b94b44fe0"
	I1101 09:19:23.318995  216020 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1101 09:19:23.319478  216020 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1101 09:19:23.319526  216020 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1101 09:19:23.319572  216020 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1101 09:19:23.347467  216020 cri.go:89] found id: "d2fc2ccdf9333f0fc4490b5a07f6246fd25294653d0b29a2e8b27d471ce65ce3"
	I1101 09:19:23.347487  216020 cri.go:89] found id: ""
	I1101 09:19:23.347496  216020 logs.go:282] 1 containers: [d2fc2ccdf9333f0fc4490b5a07f6246fd25294653d0b29a2e8b27d471ce65ce3]
	I1101 09:19:23.347556  216020 ssh_runner.go:195] Run: which crictl
	I1101 09:19:23.352021  216020 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1101 09:19:23.352094  216020 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1101 09:19:23.381063  216020 cri.go:89] found id: ""
	I1101 09:19:23.381091  216020 logs.go:282] 0 containers: []
	W1101 09:19:23.381101  216020 logs.go:284] No container was found matching "etcd"
	I1101 09:19:23.381109  216020 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1101 09:19:23.381165  216020 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1101 09:19:23.409848  216020 cri.go:89] found id: ""
	I1101 09:19:23.409883  216020 logs.go:282] 0 containers: []
	W1101 09:19:23.409895  216020 logs.go:284] No container was found matching "coredns"
	I1101 09:19:23.409903  216020 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1101 09:19:23.409960  216020 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1101 09:19:23.439352  216020 cri.go:89] found id: "7bc51deb4b19b20f402b361941a1b67db11ece9919ebd34259e751ec1f07abb8"
	I1101 09:19:23.439377  216020 cri.go:89] found id: ""
	I1101 09:19:23.439384  216020 logs.go:282] 1 containers: [7bc51deb4b19b20f402b361941a1b67db11ece9919ebd34259e751ec1f07abb8]
	I1101 09:19:23.439442  216020 ssh_runner.go:195] Run: which crictl
	I1101 09:19:23.444631  216020 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1101 09:19:23.444689  216020 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1101 09:19:23.481729  216020 cri.go:89] found id: ""
	I1101 09:19:23.481760  216020 logs.go:282] 0 containers: []
	W1101 09:19:23.481770  216020 logs.go:284] No container was found matching "kube-proxy"
	I1101 09:19:23.481779  216020 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1101 09:19:23.481965  216020 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1101 09:19:23.522513  216020 cri.go:89] found id: "40fb6827ef020faf01b83649662bbb9dd9f1e77a7e4e9f618812cf6b94b44fe0"
	I1101 09:19:23.522550  216020 cri.go:89] found id: ""
	I1101 09:19:23.522560  216020 logs.go:282] 1 containers: [40fb6827ef020faf01b83649662bbb9dd9f1e77a7e4e9f618812cf6b94b44fe0]
	I1101 09:19:23.522645  216020 ssh_runner.go:195] Run: which crictl
	I1101 09:19:23.529203  216020 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1101 09:19:23.529278  216020 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1101 09:19:23.568904  216020 cri.go:89] found id: ""
	I1101 09:19:23.568933  216020 logs.go:282] 0 containers: []
	W1101 09:19:23.568944  216020 logs.go:284] No container was found matching "kindnet"
	I1101 09:19:23.568951  216020 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1101 09:19:23.569009  216020 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1101 09:19:23.607748  216020 cri.go:89] found id: ""
	I1101 09:19:23.607780  216020 logs.go:282] 0 containers: []
	W1101 09:19:23.607791  216020 logs.go:284] No container was found matching "storage-provisioner"
	I1101 09:19:23.607802  216020 logs.go:123] Gathering logs for kube-controller-manager [40fb6827ef020faf01b83649662bbb9dd9f1e77a7e4e9f618812cf6b94b44fe0] ...
	I1101 09:19:23.607820  216020 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40fb6827ef020faf01b83649662bbb9dd9f1e77a7e4e9f618812cf6b94b44fe0"
	I1101 09:19:23.643147  216020 logs.go:123] Gathering logs for CRI-O ...
	I1101 09:19:23.643177  216020 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1101 09:19:23.704216  216020 logs.go:123] Gathering logs for container status ...
	I1101 09:19:23.704301  216020 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1101 09:19:23.747328  216020 logs.go:123] Gathering logs for kubelet ...
	I1101 09:19:23.747362  216020 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1101 09:19:23.851346  216020 logs.go:123] Gathering logs for dmesg ...
	I1101 09:19:23.851384  216020 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1101 09:19:23.875624  216020 logs.go:123] Gathering logs for describe nodes ...
	I1101 09:19:23.875664  216020 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1101 09:19:23.957183  216020 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1101 09:19:23.957210  216020 logs.go:123] Gathering logs for kube-apiserver [d2fc2ccdf9333f0fc4490b5a07f6246fd25294653d0b29a2e8b27d471ce65ce3] ...
	I1101 09:19:23.957230  216020 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d2fc2ccdf9333f0fc4490b5a07f6246fd25294653d0b29a2e8b27d471ce65ce3"
	I1101 09:19:24.002717  216020 logs.go:123] Gathering logs for kube-scheduler [7bc51deb4b19b20f402b361941a1b67db11ece9919ebd34259e751ec1f07abb8] ...
	I1101 09:19:24.002772  216020 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7bc51deb4b19b20f402b361941a1b67db11ece9919ebd34259e751ec1f07abb8"
	I1101 09:19:23.094784  232578 addons.go:515] duration metric: took 491.709759ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1101 09:19:23.383371  232578 kapi.go:214] "coredns" deployment in "kube-system" namespace and "no-preload-397460" context rescaled to 1 replicas
	W1101 09:19:24.882847  232578 node_ready.go:57] node "no-preload-397460" has "Ready":"False" status (will retry)
	W1101 09:19:26.883686  232578 node_ready.go:57] node "no-preload-397460" has "Ready":"False" status (will retry)
	W1101 09:19:25.241501  230484 node_ready.go:57] node "old-k8s-version-152344" has "Ready":"False" status (will retry)
	I1101 09:19:27.241453  230484 node_ready.go:49] node "old-k8s-version-152344" is "Ready"
	I1101 09:19:27.241484  230484 node_ready.go:38] duration metric: took 13.003828202s for node "old-k8s-version-152344" to be "Ready" ...
	I1101 09:19:27.241497  230484 api_server.go:52] waiting for apiserver process to appear ...
	I1101 09:19:27.241554  230484 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 09:19:27.255187  230484 api_server.go:72] duration metric: took 13.617052207s to wait for apiserver process to appear ...
	I1101 09:19:27.255220  230484 api_server.go:88] waiting for apiserver healthz status ...
	I1101 09:19:27.255239  230484 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1101 09:19:27.260404  230484 api_server.go:279] https://192.168.103.2:8443/healthz returned 200:
	ok
	I1101 09:19:27.261696  230484 api_server.go:141] control plane version: v1.28.0
	I1101 09:19:27.261721  230484 api_server.go:131] duration metric: took 6.494371ms to wait for apiserver health ...
	I1101 09:19:27.261730  230484 system_pods.go:43] waiting for kube-system pods to appear ...
	I1101 09:19:27.266435  230484 system_pods.go:59] 8 kube-system pods found
	I1101 09:19:27.266905  230484 system_pods.go:61] "coredns-5dd5756b68-gcvgr" [5ec9963a-a709-4a14-a266-039c4d3d9ebe] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 09:19:27.266927  230484 system_pods.go:61] "etcd-old-k8s-version-152344" [973f94b6-5289-43a5-a31e-f998c94609af] Running
	I1101 09:19:27.266937  230484 system_pods.go:61] "kindnet-9lbnx" [cdc79b68-caf2-4ddb-afff-7391a2e1402f] Running
	I1101 09:19:27.266943  230484 system_pods.go:61] "kube-apiserver-old-k8s-version-152344" [a39e0ba4-51e7-45c4-b176-5b3801cc2f23] Running
	I1101 09:19:27.266950  230484 system_pods.go:61] "kube-controller-manager-old-k8s-version-152344" [6617a056-e5f0-49b1-8c5c-5d2f293183ab] Running
	I1101 09:19:27.266955  230484 system_pods.go:61] "kube-proxy-w5hpl" [dccd7023-4810-4cc3-9ebd-d7fe6cffce88] Running
	I1101 09:19:27.266960  230484 system_pods.go:61] "kube-scheduler-old-k8s-version-152344" [7cab04d5-174b-4b38-a984-98ebc0ab7983] Running
	I1101 09:19:27.266971  230484 system_pods.go:61] "storage-provisioner" [d5b72a56-2397-4702-8443-4b854af93d01] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1101 09:19:27.266983  230484 system_pods.go:74] duration metric: took 5.246522ms to wait for pod list to return data ...
	I1101 09:19:27.266998  230484 default_sa.go:34] waiting for default service account to be created ...
	I1101 09:19:27.269416  230484 default_sa.go:45] found service account: "default"
	I1101 09:19:27.269436  230484 default_sa.go:55] duration metric: took 2.42818ms for default service account to be created ...
	I1101 09:19:27.269444  230484 system_pods.go:116] waiting for k8s-apps to be running ...
	I1101 09:19:27.272462  230484 system_pods.go:86] 8 kube-system pods found
	I1101 09:19:27.272487  230484 system_pods.go:89] "coredns-5dd5756b68-gcvgr" [5ec9963a-a709-4a14-a266-039c4d3d9ebe] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 09:19:27.272493  230484 system_pods.go:89] "etcd-old-k8s-version-152344" [973f94b6-5289-43a5-a31e-f998c94609af] Running
	I1101 09:19:27.272499  230484 system_pods.go:89] "kindnet-9lbnx" [cdc79b68-caf2-4ddb-afff-7391a2e1402f] Running
	I1101 09:19:27.272503  230484 system_pods.go:89] "kube-apiserver-old-k8s-version-152344" [a39e0ba4-51e7-45c4-b176-5b3801cc2f23] Running
	I1101 09:19:27.272507  230484 system_pods.go:89] "kube-controller-manager-old-k8s-version-152344" [6617a056-e5f0-49b1-8c5c-5d2f293183ab] Running
	I1101 09:19:27.272510  230484 system_pods.go:89] "kube-proxy-w5hpl" [dccd7023-4810-4cc3-9ebd-d7fe6cffce88] Running
	I1101 09:19:27.272514  230484 system_pods.go:89] "kube-scheduler-old-k8s-version-152344" [7cab04d5-174b-4b38-a984-98ebc0ab7983] Running
	I1101 09:19:27.272523  230484 system_pods.go:89] "storage-provisioner" [d5b72a56-2397-4702-8443-4b854af93d01] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1101 09:19:27.272540  230484 retry.go:31] will retry after 217.694696ms: missing components: kube-dns
	I1101 09:19:27.494650  230484 system_pods.go:86] 8 kube-system pods found
	I1101 09:19:27.494683  230484 system_pods.go:89] "coredns-5dd5756b68-gcvgr" [5ec9963a-a709-4a14-a266-039c4d3d9ebe] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 09:19:27.494689  230484 system_pods.go:89] "etcd-old-k8s-version-152344" [973f94b6-5289-43a5-a31e-f998c94609af] Running
	I1101 09:19:27.494694  230484 system_pods.go:89] "kindnet-9lbnx" [cdc79b68-caf2-4ddb-afff-7391a2e1402f] Running
	I1101 09:19:27.494698  230484 system_pods.go:89] "kube-apiserver-old-k8s-version-152344" [a39e0ba4-51e7-45c4-b176-5b3801cc2f23] Running
	I1101 09:19:27.494702  230484 system_pods.go:89] "kube-controller-manager-old-k8s-version-152344" [6617a056-e5f0-49b1-8c5c-5d2f293183ab] Running
	I1101 09:19:27.494705  230484 system_pods.go:89] "kube-proxy-w5hpl" [dccd7023-4810-4cc3-9ebd-d7fe6cffce88] Running
	I1101 09:19:27.494708  230484 system_pods.go:89] "kube-scheduler-old-k8s-version-152344" [7cab04d5-174b-4b38-a984-98ebc0ab7983] Running
	I1101 09:19:27.494712  230484 system_pods.go:89] "storage-provisioner" [d5b72a56-2397-4702-8443-4b854af93d01] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1101 09:19:27.494729  230484 retry.go:31] will retry after 294.558538ms: missing components: kube-dns
	I1101 09:19:27.797645  230484 system_pods.go:86] 8 kube-system pods found
	I1101 09:19:27.797687  230484 system_pods.go:89] "coredns-5dd5756b68-gcvgr" [5ec9963a-a709-4a14-a266-039c4d3d9ebe] Running
	I1101 09:19:27.797696  230484 system_pods.go:89] "etcd-old-k8s-version-152344" [973f94b6-5289-43a5-a31e-f998c94609af] Running
	I1101 09:19:27.797701  230484 system_pods.go:89] "kindnet-9lbnx" [cdc79b68-caf2-4ddb-afff-7391a2e1402f] Running
	I1101 09:19:27.797707  230484 system_pods.go:89] "kube-apiserver-old-k8s-version-152344" [a39e0ba4-51e7-45c4-b176-5b3801cc2f23] Running
	I1101 09:19:27.797713  230484 system_pods.go:89] "kube-controller-manager-old-k8s-version-152344" [6617a056-e5f0-49b1-8c5c-5d2f293183ab] Running
	I1101 09:19:27.797717  230484 system_pods.go:89] "kube-proxy-w5hpl" [dccd7023-4810-4cc3-9ebd-d7fe6cffce88] Running
	I1101 09:19:27.797722  230484 system_pods.go:89] "kube-scheduler-old-k8s-version-152344" [7cab04d5-174b-4b38-a984-98ebc0ab7983] Running
	I1101 09:19:27.797726  230484 system_pods.go:89] "storage-provisioner" [d5b72a56-2397-4702-8443-4b854af93d01] Running
	I1101 09:19:27.797736  230484 system_pods.go:126] duration metric: took 528.285851ms to wait for k8s-apps to be running ...
	I1101 09:19:27.797745  230484 system_svc.go:44] waiting for kubelet service to be running ....
	I1101 09:19:27.797799  230484 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 09:19:27.820910  230484 system_svc.go:56] duration metric: took 23.154125ms WaitForService to wait for kubelet
	I1101 09:19:27.820955  230484 kubeadm.go:587] duration metric: took 14.182833098s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1101 09:19:27.820983  230484 node_conditions.go:102] verifying NodePressure condition ...
	I1101 09:19:27.823633  230484 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1101 09:19:27.823664  230484 node_conditions.go:123] node cpu capacity is 8
	I1101 09:19:27.823677  230484 node_conditions.go:105] duration metric: took 2.68898ms to run NodePressure ...
	I1101 09:19:27.823687  230484 start.go:242] waiting for startup goroutines ...
	I1101 09:19:27.823693  230484 start.go:247] waiting for cluster config update ...
	I1101 09:19:27.823702  230484 start.go:256] writing updated cluster config ...
	I1101 09:19:27.824000  230484 ssh_runner.go:195] Run: rm -f paused
	I1101 09:19:27.827937  230484 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1101 09:19:27.832299  230484 pod_ready.go:83] waiting for pod "coredns-5dd5756b68-gcvgr" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:19:27.836741  230484 pod_ready.go:94] pod "coredns-5dd5756b68-gcvgr" is "Ready"
	I1101 09:19:27.836763  230484 pod_ready.go:86] duration metric: took 4.436419ms for pod "coredns-5dd5756b68-gcvgr" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:19:27.839293  230484 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-152344" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:19:27.843933  230484 pod_ready.go:94] pod "etcd-old-k8s-version-152344" is "Ready"
	I1101 09:19:27.843961  230484 pod_ready.go:86] duration metric: took 4.640451ms for pod "etcd-old-k8s-version-152344" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:19:27.847386  230484 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-152344" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:19:27.852558  230484 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-152344" is "Ready"
	I1101 09:19:27.852581  230484 pod_ready.go:86] duration metric: took 5.170029ms for pod "kube-apiserver-old-k8s-version-152344" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:19:27.855451  230484 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-152344" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:19:28.232931  230484 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-152344" is "Ready"
	I1101 09:19:28.232963  230484 pod_ready.go:86] duration metric: took 377.487682ms for pod "kube-controller-manager-old-k8s-version-152344" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:19:28.432666  230484 pod_ready.go:83] waiting for pod "kube-proxy-w5hpl" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:19:28.832919  230484 pod_ready.go:94] pod "kube-proxy-w5hpl" is "Ready"
	I1101 09:19:28.832948  230484 pod_ready.go:86] duration metric: took 400.249165ms for pod "kube-proxy-w5hpl" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:19:29.032920  230484 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-152344" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:19:29.432238  230484 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-152344" is "Ready"
	I1101 09:19:29.432267  230484 pod_ready.go:86] duration metric: took 399.317125ms for pod "kube-scheduler-old-k8s-version-152344" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:19:29.432278  230484 pod_ready.go:40] duration metric: took 1.604313695s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1101 09:19:29.477921  230484 start.go:628] kubectl: 1.34.1, cluster: 1.28.0 (minor skew: 6)
	I1101 09:19:29.479634  230484 out.go:203] 
	W1101 09:19:29.480898  230484 out.go:285] ! /usr/local/bin/kubectl is version 1.34.1, which may have incompatibilities with Kubernetes 1.28.0.
	I1101 09:19:29.482180  230484 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I1101 09:19:29.483719  230484 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-152344" cluster and "default" namespace by default
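	The pod_ready.go lines above poll each kube-system pod until it reports "Ready", retrying every few hundred milliseconds. A minimal stand-in for that loop, shelling out to kubectl instead of the client-go calls minikube actually uses (pod and namespace names here are just the ones from this log), could look like:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
		"time"
	)

	// waitPodReady retries a kubectl jsonpath query until the pod's Ready
	// condition is "True" or the deadline passes. Illustrative only; not
	// minikube's actual implementation.
	func waitPodReady(namespace, pod string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			out, err := exec.Command("kubectl", "-n", namespace, "get", "pod", pod,
				"-o", `jsonpath={.status.conditions[?(@.type=="Ready")].status}`).Output()
			if err == nil && strings.TrimSpace(string(out)) == "True" {
				return nil
			}
			time.Sleep(2 * time.Second)
		}
		return fmt.Errorf("pod %s/%s not Ready after %s", namespace, pod, timeout)
	}

	func main() {
		if err := waitPodReady("kube-system", "coredns-5dd5756b68-gcvgr", 4*time.Minute); err != nil {
			fmt.Println(err)
		}
	}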
	I1101 09:19:26.555501  216020 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1101 09:19:26.556141  216020 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1101 09:19:26.556193  216020 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1101 09:19:26.556258  216020 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1101 09:19:26.585970  216020 cri.go:89] found id: "d2fc2ccdf9333f0fc4490b5a07f6246fd25294653d0b29a2e8b27d471ce65ce3"
	I1101 09:19:26.585989  216020 cri.go:89] found id: ""
	I1101 09:19:26.585997  216020 logs.go:282] 1 containers: [d2fc2ccdf9333f0fc4490b5a07f6246fd25294653d0b29a2e8b27d471ce65ce3]
	I1101 09:19:26.586045  216020 ssh_runner.go:195] Run: which crictl
	I1101 09:19:26.590176  216020 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1101 09:19:26.590244  216020 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1101 09:19:26.618187  216020 cri.go:89] found id: ""
	I1101 09:19:26.618215  216020 logs.go:282] 0 containers: []
	W1101 09:19:26.618223  216020 logs.go:284] No container was found matching "etcd"
	I1101 09:19:26.618233  216020 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1101 09:19:26.618281  216020 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1101 09:19:26.646734  216020 cri.go:89] found id: ""
	I1101 09:19:26.646762  216020 logs.go:282] 0 containers: []
	W1101 09:19:26.646770  216020 logs.go:284] No container was found matching "coredns"
	I1101 09:19:26.646776  216020 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1101 09:19:26.646832  216020 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1101 09:19:26.676454  216020 cri.go:89] found id: "7bc51deb4b19b20f402b361941a1b67db11ece9919ebd34259e751ec1f07abb8"
	I1101 09:19:26.676486  216020 cri.go:89] found id: ""
	I1101 09:19:26.676495  216020 logs.go:282] 1 containers: [7bc51deb4b19b20f402b361941a1b67db11ece9919ebd34259e751ec1f07abb8]
	I1101 09:19:26.676549  216020 ssh_runner.go:195] Run: which crictl
	I1101 09:19:26.680782  216020 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1101 09:19:26.680851  216020 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1101 09:19:26.707382  216020 cri.go:89] found id: ""
	I1101 09:19:26.707409  216020 logs.go:282] 0 containers: []
	W1101 09:19:26.707416  216020 logs.go:284] No container was found matching "kube-proxy"
	I1101 09:19:26.707422  216020 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1101 09:19:26.707481  216020 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1101 09:19:26.735450  216020 cri.go:89] found id: "40fb6827ef020faf01b83649662bbb9dd9f1e77a7e4e9f618812cf6b94b44fe0"
	I1101 09:19:26.735477  216020 cri.go:89] found id: ""
	I1101 09:19:26.735485  216020 logs.go:282] 1 containers: [40fb6827ef020faf01b83649662bbb9dd9f1e77a7e4e9f618812cf6b94b44fe0]
	I1101 09:19:26.735535  216020 ssh_runner.go:195] Run: which crictl
	I1101 09:19:26.739594  216020 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1101 09:19:26.739667  216020 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1101 09:19:26.771640  216020 cri.go:89] found id: ""
	I1101 09:19:26.771670  216020 logs.go:282] 0 containers: []
	W1101 09:19:26.771680  216020 logs.go:284] No container was found matching "kindnet"
	I1101 09:19:26.771688  216020 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1101 09:19:26.771771  216020 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1101 09:19:26.803098  216020 cri.go:89] found id: ""
	I1101 09:19:26.803123  216020 logs.go:282] 0 containers: []
	W1101 09:19:26.803133  216020 logs.go:284] No container was found matching "storage-provisioner"
	I1101 09:19:26.803144  216020 logs.go:123] Gathering logs for kube-controller-manager [40fb6827ef020faf01b83649662bbb9dd9f1e77a7e4e9f618812cf6b94b44fe0] ...
	I1101 09:19:26.803159  216020 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40fb6827ef020faf01b83649662bbb9dd9f1e77a7e4e9f618812cf6b94b44fe0"
	I1101 09:19:26.831499  216020 logs.go:123] Gathering logs for CRI-O ...
	I1101 09:19:26.831524  216020 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1101 09:19:26.872461  216020 logs.go:123] Gathering logs for container status ...
	I1101 09:19:26.872500  216020 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1101 09:19:26.903665  216020 logs.go:123] Gathering logs for kubelet ...
	I1101 09:19:26.903691  216020 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1101 09:19:26.977305  216020 logs.go:123] Gathering logs for dmesg ...
	I1101 09:19:26.977346  216020 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1101 09:19:26.993146  216020 logs.go:123] Gathering logs for describe nodes ...
	I1101 09:19:26.993174  216020 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1101 09:19:27.053816  216020 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1101 09:19:27.053837  216020 logs.go:123] Gathering logs for kube-apiserver [d2fc2ccdf9333f0fc4490b5a07f6246fd25294653d0b29a2e8b27d471ce65ce3] ...
	I1101 09:19:27.053849  216020 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d2fc2ccdf9333f0fc4490b5a07f6246fd25294653d0b29a2e8b27d471ce65ce3"
	I1101 09:19:27.087084  216020 logs.go:123] Gathering logs for kube-scheduler [7bc51deb4b19b20f402b361941a1b67db11ece9919ebd34259e751ec1f07abb8] ...
	I1101 09:19:27.087124  216020 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7bc51deb4b19b20f402b361941a1b67db11ece9919ebd34259e751ec1f07abb8"
	I1101 09:19:29.645416  216020 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1101 09:19:29.645801  216020 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1101 09:19:29.645856  216020 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1101 09:19:29.645932  216020 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1101 09:19:29.676096  216020 cri.go:89] found id: "d2fc2ccdf9333f0fc4490b5a07f6246fd25294653d0b29a2e8b27d471ce65ce3"
	I1101 09:19:29.676121  216020 cri.go:89] found id: ""
	I1101 09:19:29.676132  216020 logs.go:282] 1 containers: [d2fc2ccdf9333f0fc4490b5a07f6246fd25294653d0b29a2e8b27d471ce65ce3]
	I1101 09:19:29.676184  216020 ssh_runner.go:195] Run: which crictl
	I1101 09:19:29.680526  216020 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1101 09:19:29.680594  216020 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1101 09:19:29.712098  216020 cri.go:89] found id: ""
	I1101 09:19:29.712125  216020 logs.go:282] 0 containers: []
	W1101 09:19:29.712134  216020 logs.go:284] No container was found matching "etcd"
	I1101 09:19:29.712139  216020 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1101 09:19:29.712183  216020 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1101 09:19:29.742683  216020 cri.go:89] found id: ""
	I1101 09:19:29.742710  216020 logs.go:282] 0 containers: []
	W1101 09:19:29.742722  216020 logs.go:284] No container was found matching "coredns"
	I1101 09:19:29.742730  216020 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1101 09:19:29.742795  216020 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1101 09:19:29.771484  216020 cri.go:89] found id: "7bc51deb4b19b20f402b361941a1b67db11ece9919ebd34259e751ec1f07abb8"
	I1101 09:19:29.771504  216020 cri.go:89] found id: ""
	I1101 09:19:29.771511  216020 logs.go:282] 1 containers: [7bc51deb4b19b20f402b361941a1b67db11ece9919ebd34259e751ec1f07abb8]
	I1101 09:19:29.771567  216020 ssh_runner.go:195] Run: which crictl
	I1101 09:19:29.775918  216020 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1101 09:19:29.775983  216020 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1101 09:19:29.803530  216020 cri.go:89] found id: ""
	I1101 09:19:29.803551  216020 logs.go:282] 0 containers: []
	W1101 09:19:29.803568  216020 logs.go:284] No container was found matching "kube-proxy"
	I1101 09:19:29.803573  216020 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1101 09:19:29.803619  216020 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1101 09:19:29.832079  216020 cri.go:89] found id: "e12b6f88db2ee59fe145026094d5a5de34732fe6f41de9f534b1a2f1a975b677"
	I1101 09:19:29.832099  216020 cri.go:89] found id: "40fb6827ef020faf01b83649662bbb9dd9f1e77a7e4e9f618812cf6b94b44fe0"
	I1101 09:19:29.832103  216020 cri.go:89] found id: ""
	I1101 09:19:29.832109  216020 logs.go:282] 2 containers: [e12b6f88db2ee59fe145026094d5a5de34732fe6f41de9f534b1a2f1a975b677 40fb6827ef020faf01b83649662bbb9dd9f1e77a7e4e9f618812cf6b94b44fe0]
	I1101 09:19:29.832167  216020 ssh_runner.go:195] Run: which crictl
	W1101 09:19:29.383068  232578 node_ready.go:57] node "no-preload-397460" has "Ready":"False" status (will retry)
	W1101 09:19:31.383683  232578 node_ready.go:57] node "no-preload-397460" has "Ready":"False" status (will retry)
	I1101 09:19:29.836799  216020 ssh_runner.go:195] Run: which crictl
	I1101 09:19:29.840642  216020 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1101 09:19:29.840699  216020 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1101 09:19:29.869504  216020 cri.go:89] found id: ""
	I1101 09:19:29.869526  216020 logs.go:282] 0 containers: []
	W1101 09:19:29.869533  216020 logs.go:284] No container was found matching "kindnet"
	I1101 09:19:29.869539  216020 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1101 09:19:29.869586  216020 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1101 09:19:29.898030  216020 cri.go:89] found id: ""
	I1101 09:19:29.898056  216020 logs.go:282] 0 containers: []
	W1101 09:19:29.898064  216020 logs.go:284] No container was found matching "storage-provisioner"
	I1101 09:19:29.898077  216020 logs.go:123] Gathering logs for CRI-O ...
	I1101 09:19:29.898089  216020 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1101 09:19:29.940625  216020 logs.go:123] Gathering logs for dmesg ...
	I1101 09:19:29.940661  216020 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1101 09:19:29.958704  216020 logs.go:123] Gathering logs for container status ...
	I1101 09:19:29.958739  216020 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1101 09:19:29.993620  216020 logs.go:123] Gathering logs for kubelet ...
	I1101 09:19:29.993644  216020 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1101 09:19:30.061349  216020 logs.go:123] Gathering logs for describe nodes ...
	I1101 09:19:30.061382  216020 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1101 09:19:30.118839  216020 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1101 09:19:30.118858  216020 logs.go:123] Gathering logs for kube-apiserver [d2fc2ccdf9333f0fc4490b5a07f6246fd25294653d0b29a2e8b27d471ce65ce3] ...
	I1101 09:19:30.118903  216020 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d2fc2ccdf9333f0fc4490b5a07f6246fd25294653d0b29a2e8b27d471ce65ce3"
	I1101 09:19:30.151945  216020 logs.go:123] Gathering logs for kube-scheduler [7bc51deb4b19b20f402b361941a1b67db11ece9919ebd34259e751ec1f07abb8] ...
	I1101 09:19:30.151977  216020 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7bc51deb4b19b20f402b361941a1b67db11ece9919ebd34259e751ec1f07abb8"
	I1101 09:19:30.199337  216020 logs.go:123] Gathering logs for kube-controller-manager [e12b6f88db2ee59fe145026094d5a5de34732fe6f41de9f534b1a2f1a975b677] ...
	I1101 09:19:30.199372  216020 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e12b6f88db2ee59fe145026094d5a5de34732fe6f41de9f534b1a2f1a975b677"
	I1101 09:19:30.227021  216020 logs.go:123] Gathering logs for kube-controller-manager [40fb6827ef020faf01b83649662bbb9dd9f1e77a7e4e9f618812cf6b94b44fe0] ...
	I1101 09:19:30.227052  216020 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40fb6827ef020faf01b83649662bbb9dd9f1e77a7e4e9f618812cf6b94b44fe0"
	I1101 09:19:32.755426  216020 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	W1101 09:19:33.883785  232578 node_ready.go:57] node "no-preload-397460" has "Ready":"False" status (will retry)
	I1101 09:19:35.883179  232578 node_ready.go:49] node "no-preload-397460" is "Ready"
	I1101 09:19:35.883207  232578 node_ready.go:38] duration metric: took 13.003265928s for node "no-preload-397460" to be "Ready" ...
	I1101 09:19:35.883220  232578 api_server.go:52] waiting for apiserver process to appear ...
	I1101 09:19:35.883277  232578 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 09:19:35.895919  232578 api_server.go:72] duration metric: took 13.292834884s to wait for apiserver process to appear ...
	I1101 09:19:35.895947  232578 api_server.go:88] waiting for apiserver healthz status ...
	I1101 09:19:35.895978  232578 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1101 09:19:35.901230  232578 api_server.go:279] https://192.168.94.2:8443/healthz returned 200:
	ok
	I1101 09:19:35.902248  232578 api_server.go:141] control plane version: v1.34.1
	I1101 09:19:35.902274  232578 api_server.go:131] duration metric: took 6.320768ms to wait for apiserver health ...
	I1101 09:19:35.902283  232578 system_pods.go:43] waiting for kube-system pods to appear ...
	I1101 09:19:35.905830  232578 system_pods.go:59] 8 kube-system pods found
	I1101 09:19:35.905908  232578 system_pods.go:61] "coredns-66bc5c9577-z5578" [5bbebf5b-a427-4501-881c-fc445ff4054c] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 09:19:35.905916  232578 system_pods.go:61] "etcd-no-preload-397460" [3aa978e5-6af4-4e61-8352-dfa542467d98] Running
	I1101 09:19:35.905927  232578 system_pods.go:61] "kindnet-lddf5" [85b09376-b18b-444d-8405-a7045c3732dc] Running
	I1101 09:19:35.905931  232578 system_pods.go:61] "kube-apiserver-no-preload-397460" [d14cbd4e-ca20-4299-b00f-56273156c4c1] Running
	I1101 09:19:35.905935  232578 system_pods.go:61] "kube-controller-manager-no-preload-397460" [76ef0890-2a18-4ddd-8196-5a505773f7f0] Running
	I1101 09:19:35.905942  232578 system_pods.go:61] "kube-proxy-5kpft" [788827b1-dfc6-4921-a791-13a752d335aa] Running
	I1101 09:19:35.905945  232578 system_pods.go:61] "kube-scheduler-no-preload-397460" [e06dfb76-9322-497e-a36a-2320f2103cac] Running
	I1101 09:19:35.905950  232578 system_pods.go:61] "storage-provisioner" [8c7273a1-68fa-4783-948d-41e29d4fc406] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1101 09:19:35.905958  232578 system_pods.go:74] duration metric: took 3.669278ms to wait for pod list to return data ...
	I1101 09:19:35.905970  232578 default_sa.go:34] waiting for default service account to be created ...
	I1101 09:19:35.908264  232578 default_sa.go:45] found service account: "default"
	I1101 09:19:35.908284  232578 default_sa.go:55] duration metric: took 2.309364ms for default service account to be created ...
	I1101 09:19:35.908293  232578 system_pods.go:116] waiting for k8s-apps to be running ...
	I1101 09:19:35.911150  232578 system_pods.go:86] 8 kube-system pods found
	I1101 09:19:35.911182  232578 system_pods.go:89] "coredns-66bc5c9577-z5578" [5bbebf5b-a427-4501-881c-fc445ff4054c] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 09:19:35.911188  232578 system_pods.go:89] "etcd-no-preload-397460" [3aa978e5-6af4-4e61-8352-dfa542467d98] Running
	I1101 09:19:35.911195  232578 system_pods.go:89] "kindnet-lddf5" [85b09376-b18b-444d-8405-a7045c3732dc] Running
	I1101 09:19:35.911198  232578 system_pods.go:89] "kube-apiserver-no-preload-397460" [d14cbd4e-ca20-4299-b00f-56273156c4c1] Running
	I1101 09:19:35.911202  232578 system_pods.go:89] "kube-controller-manager-no-preload-397460" [76ef0890-2a18-4ddd-8196-5a505773f7f0] Running
	I1101 09:19:35.911205  232578 system_pods.go:89] "kube-proxy-5kpft" [788827b1-dfc6-4921-a791-13a752d335aa] Running
	I1101 09:19:35.911211  232578 system_pods.go:89] "kube-scheduler-no-preload-397460" [e06dfb76-9322-497e-a36a-2320f2103cac] Running
	I1101 09:19:35.911215  232578 system_pods.go:89] "storage-provisioner" [8c7273a1-68fa-4783-948d-41e29d4fc406] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1101 09:19:35.911246  232578 retry.go:31] will retry after 216.527236ms: missing components: kube-dns
	I1101 09:19:36.132540  232578 system_pods.go:86] 8 kube-system pods found
	I1101 09:19:36.132569  232578 system_pods.go:89] "coredns-66bc5c9577-z5578" [5bbebf5b-a427-4501-881c-fc445ff4054c] Running
	I1101 09:19:36.132575  232578 system_pods.go:89] "etcd-no-preload-397460" [3aa978e5-6af4-4e61-8352-dfa542467d98] Running
	I1101 09:19:36.132579  232578 system_pods.go:89] "kindnet-lddf5" [85b09376-b18b-444d-8405-a7045c3732dc] Running
	I1101 09:19:36.132583  232578 system_pods.go:89] "kube-apiserver-no-preload-397460" [d14cbd4e-ca20-4299-b00f-56273156c4c1] Running
	I1101 09:19:36.132586  232578 system_pods.go:89] "kube-controller-manager-no-preload-397460" [76ef0890-2a18-4ddd-8196-5a505773f7f0] Running
	I1101 09:19:36.132589  232578 system_pods.go:89] "kube-proxy-5kpft" [788827b1-dfc6-4921-a791-13a752d335aa] Running
	I1101 09:19:36.132592  232578 system_pods.go:89] "kube-scheduler-no-preload-397460" [e06dfb76-9322-497e-a36a-2320f2103cac] Running
	I1101 09:19:36.132595  232578 system_pods.go:89] "storage-provisioner" [8c7273a1-68fa-4783-948d-41e29d4fc406] Running
	I1101 09:19:36.132603  232578 system_pods.go:126] duration metric: took 224.304042ms to wait for k8s-apps to be running ...
	I1101 09:19:36.132610  232578 system_svc.go:44] waiting for kubelet service to be running ....
	I1101 09:19:36.132671  232578 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 09:19:36.146234  232578 system_svc.go:56] duration metric: took 13.611787ms WaitForService to wait for kubelet
	I1101 09:19:36.146268  232578 kubeadm.go:587] duration metric: took 13.543188984s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1101 09:19:36.146320  232578 node_conditions.go:102] verifying NodePressure condition ...
	I1101 09:19:36.149319  232578 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1101 09:19:36.149352  232578 node_conditions.go:123] node cpu capacity is 8
	I1101 09:19:36.149366  232578 node_conditions.go:105] duration metric: took 3.036522ms to run NodePressure ...
	I1101 09:19:36.149377  232578 start.go:242] waiting for startup goroutines ...
	I1101 09:19:36.149383  232578 start.go:247] waiting for cluster config update ...
	I1101 09:19:36.149392  232578 start.go:256] writing updated cluster config ...
	I1101 09:19:36.149717  232578 ssh_runner.go:195] Run: rm -f paused
	I1101 09:19:36.154217  232578 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1101 09:19:36.158075  232578 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-z5578" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:19:36.162352  232578 pod_ready.go:94] pod "coredns-66bc5c9577-z5578" is "Ready"
	I1101 09:19:36.162372  232578 pod_ready.go:86] duration metric: took 4.274864ms for pod "coredns-66bc5c9577-z5578" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:19:36.164299  232578 pod_ready.go:83] waiting for pod "etcd-no-preload-397460" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:19:36.168049  232578 pod_ready.go:94] pod "etcd-no-preload-397460" is "Ready"
	I1101 09:19:36.168069  232578 pod_ready.go:86] duration metric: took 3.751742ms for pod "etcd-no-preload-397460" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:19:36.169818  232578 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-397460" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:19:36.173374  232578 pod_ready.go:94] pod "kube-apiserver-no-preload-397460" is "Ready"
	I1101 09:19:36.173395  232578 pod_ready.go:86] duration metric: took 3.558027ms for pod "kube-apiserver-no-preload-397460" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:19:36.175317  232578 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-397460" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:19:36.558378  232578 pod_ready.go:94] pod "kube-controller-manager-no-preload-397460" is "Ready"
	I1101 09:19:36.558406  232578 pod_ready.go:86] duration metric: took 383.072258ms for pod "kube-controller-manager-no-preload-397460" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:19:36.758615  232578 pod_ready.go:83] waiting for pod "kube-proxy-5kpft" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:19:37.159032  232578 pod_ready.go:94] pod "kube-proxy-5kpft" is "Ready"
	I1101 09:19:37.159059  232578 pod_ready.go:86] duration metric: took 400.417287ms for pod "kube-proxy-5kpft" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:19:37.359536  232578 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-397460" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:19:37.759111  232578 pod_ready.go:94] pod "kube-scheduler-no-preload-397460" is "Ready"
	I1101 09:19:37.759138  232578 pod_ready.go:86] duration metric: took 399.576874ms for pod "kube-scheduler-no-preload-397460" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:19:37.759153  232578 pod_ready.go:40] duration metric: took 1.604899725s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1101 09:19:37.811243  232578 start.go:628] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1101 09:19:37.813549  232578 out.go:179] * Done! kubectl is now configured to use "no-preload-397460" cluster and "default" namespace by default
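	Both clusters only converge once the repeated "Checking apiserver healthz" probes above start returning 200. A minimal sketch of that probe loop, using plain net/http and skipping TLS verification purely for brevity (minikube's api_server.go uses the cluster CA instead), might be:

	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	// pollHealthz keeps probing the apiserver /healthz endpoint until it
	// answers 200 or the deadline passes, mirroring the retry pattern in
	// the log above. Illustrative only.
	func pollHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			Timeout: 2 * time.Second,
			Transport: &http.Transport{
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil // the "returned 200: ok" case
				}
			}
			time.Sleep(2 * time.Second) // connection refused until the apiserver is up
		}
		return fmt.Errorf("apiserver %s not healthy after %s", url, timeout)
	}

	func main() {
		if err := pollHealthz("https://192.168.85.2:8443/healthz", 4*time.Minute); err != nil {
			fmt.Println(err)
		}
	}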
	
	
	==> CRI-O <==
	Nov 01 09:19:27 old-k8s-version-152344 crio[776]: time="2025-11-01T09:19:27.135224349Z" level=info msg="Starting container: 14a3f180347c69bf6ac6efe49813788fdc5da5cb3156ad63508388c6d07a9a04" id=d469c80b-d31e-4370-bbd1-12ca2d09dadd name=/runtime.v1.RuntimeService/StartContainer
	Nov 01 09:19:27 old-k8s-version-152344 crio[776]: time="2025-11-01T09:19:27.137259257Z" level=info msg="Started container" PID=2142 containerID=14a3f180347c69bf6ac6efe49813788fdc5da5cb3156ad63508388c6d07a9a04 description=kube-system/coredns-5dd5756b68-gcvgr/coredns id=d469c80b-d31e-4370-bbd1-12ca2d09dadd name=/runtime.v1.RuntimeService/StartContainer sandboxID=de793e99da00419647e42557b58d96d15821df16942fb40f050cdf594e0e72c6
	Nov 01 09:19:29 old-k8s-version-152344 crio[776]: time="2025-11-01T09:19:29.942132609Z" level=info msg="Running pod sandbox: default/busybox/POD" id=9c67324a-f8a1-4cd7-98d8-21766abae37f name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 01 09:19:29 old-k8s-version-152344 crio[776]: time="2025-11-01T09:19:29.942252482Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 09:19:29 old-k8s-version-152344 crio[776]: time="2025-11-01T09:19:29.948001121Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:f7aa0f81249b798c61b79787f1b2cebd2c0cb2517e61818432522ce7e026f7fc UID:9fab4cbc-fb02-4d8c-a42d-3898aed47002 NetNS:/var/run/netns/b2905f7c-a6cb-4340-89ae-7c1fbd58da69 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc0006425c0}] Aliases:map[]}"
	Nov 01 09:19:29 old-k8s-version-152344 crio[776]: time="2025-11-01T09:19:29.948037643Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Nov 01 09:19:29 old-k8s-version-152344 crio[776]: time="2025-11-01T09:19:29.958302073Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:f7aa0f81249b798c61b79787f1b2cebd2c0cb2517e61818432522ce7e026f7fc UID:9fab4cbc-fb02-4d8c-a42d-3898aed47002 NetNS:/var/run/netns/b2905f7c-a6cb-4340-89ae-7c1fbd58da69 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc0006425c0}] Aliases:map[]}"
	Nov 01 09:19:29 old-k8s-version-152344 crio[776]: time="2025-11-01T09:19:29.958481587Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Nov 01 09:19:29 old-k8s-version-152344 crio[776]: time="2025-11-01T09:19:29.959468191Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Nov 01 09:19:29 old-k8s-version-152344 crio[776]: time="2025-11-01T09:19:29.960538006Z" level=info msg="Ran pod sandbox f7aa0f81249b798c61b79787f1b2cebd2c0cb2517e61818432522ce7e026f7fc with infra container: default/busybox/POD" id=9c67324a-f8a1-4cd7-98d8-21766abae37f name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 01 09:19:29 old-k8s-version-152344 crio[776]: time="2025-11-01T09:19:29.961984594Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=0f997e95-8162-4f7c-a206-19b16627d794 name=/runtime.v1.ImageService/ImageStatus
	Nov 01 09:19:29 old-k8s-version-152344 crio[776]: time="2025-11-01T09:19:29.962123166Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=0f997e95-8162-4f7c-a206-19b16627d794 name=/runtime.v1.ImageService/ImageStatus
	Nov 01 09:19:29 old-k8s-version-152344 crio[776]: time="2025-11-01T09:19:29.962157782Z" level=info msg="Neither image nor artfiact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=0f997e95-8162-4f7c-a206-19b16627d794 name=/runtime.v1.ImageService/ImageStatus
	Nov 01 09:19:29 old-k8s-version-152344 crio[776]: time="2025-11-01T09:19:29.962678064Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=16c073c1-bce3-467e-ba96-6a8e678aaa2b name=/runtime.v1.ImageService/PullImage
	Nov 01 09:19:29 old-k8s-version-152344 crio[776]: time="2025-11-01T09:19:29.964043335Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Nov 01 09:19:30 old-k8s-version-152344 crio[776]: time="2025-11-01T09:19:30.711590413Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998" id=16c073c1-bce3-467e-ba96-6a8e678aaa2b name=/runtime.v1.ImageService/PullImage
	Nov 01 09:19:30 old-k8s-version-152344 crio[776]: time="2025-11-01T09:19:30.712437218Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=d6128fc7-36be-4972-8191-91a89c204ab7 name=/runtime.v1.ImageService/ImageStatus
	Nov 01 09:19:30 old-k8s-version-152344 crio[776]: time="2025-11-01T09:19:30.714065788Z" level=info msg="Creating container: default/busybox/busybox" id=01364333-702c-4be5-b7ae-2323722b0b96 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 09:19:30 old-k8s-version-152344 crio[776]: time="2025-11-01T09:19:30.714201618Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 09:19:30 old-k8s-version-152344 crio[776]: time="2025-11-01T09:19:30.718048896Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 09:19:30 old-k8s-version-152344 crio[776]: time="2025-11-01T09:19:30.718612991Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 09:19:30 old-k8s-version-152344 crio[776]: time="2025-11-01T09:19:30.752550589Z" level=info msg="Created container d940dfb2b4d3631e714362b88469f0c553303a59811382d613ccbcdae6b7f69e: default/busybox/busybox" id=01364333-702c-4be5-b7ae-2323722b0b96 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 09:19:30 old-k8s-version-152344 crio[776]: time="2025-11-01T09:19:30.753203951Z" level=info msg="Starting container: d940dfb2b4d3631e714362b88469f0c553303a59811382d613ccbcdae6b7f69e" id=dbe480dc-c1e4-429d-abad-d48ee7cbe222 name=/runtime.v1.RuntimeService/StartContainer
	Nov 01 09:19:30 old-k8s-version-152344 crio[776]: time="2025-11-01T09:19:30.755502553Z" level=info msg="Started container" PID=2221 containerID=d940dfb2b4d3631e714362b88469f0c553303a59811382d613ccbcdae6b7f69e description=default/busybox/busybox id=dbe480dc-c1e4-429d-abad-d48ee7cbe222 name=/runtime.v1.RuntimeService/StartContainer sandboxID=f7aa0f81249b798c61b79787f1b2cebd2c0cb2517e61818432522ce7e026f7fc
	Nov 01 09:19:37 old-k8s-version-152344 crio[776]: time="2025-11-01T09:19:37.719016616Z" level=error msg="Unhandled Error: unable to upgrade websocket connection: websocket server finished before becoming ready (logger=\"UnhandledError\")"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                              NAMESPACE
	d940dfb2b4d36       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998   8 seconds ago       Running             busybox                   0                   f7aa0f81249b7       busybox                                          default
	14a3f180347c6       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                      12 seconds ago      Running             coredns                   0                   de793e99da004       coredns-5dd5756b68-gcvgr                         kube-system
	3171eb91502f4       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      12 seconds ago      Running             storage-provisioner       0                   c98b7a6c9a9e2       storage-provisioner                              kube-system
	a12c56b358f17       docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11    23 seconds ago      Running             kindnet-cni               0                   cd368b9153a02       kindnet-9lbnx                                    kube-system
	4b8cb5ae5d1f1       ea1030da44aa18666a7bf15fddd2a38c3143c3277159cb8bdd95f45c8ce62d7a                                      25 seconds ago      Running             kube-proxy                0                   3965ddc5c4f2a       kube-proxy-w5hpl                                 kube-system
	3c4b9272e9335       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                      43 seconds ago      Running             etcd                      0                   b2c2b941d7fbb       etcd-old-k8s-version-152344                      kube-system
	8f31bf46697ee       f6f496300a2ae7a6727ccf3080d66d2fd22b6cfc271df5351c976c23a28bb157                                      43 seconds ago      Running             kube-scheduler            0                   e6be9d48a23c6       kube-scheduler-old-k8s-version-152344            kube-system
	ebe8e9d310708       bb5e0dde9054c02d6badee88547be7e7bb7b7b818d277c8a61b4b29484bbff95                                      43 seconds ago      Running             kube-apiserver            0                   f4276dd4d3e5f       kube-apiserver-old-k8s-version-152344            kube-system
	fc31241523cb3       4be79c38a4bab6e1252a35697500e8a0d9c5c7c771d9fcc1935c9a7f6cdf4c62                                      43 seconds ago      Running             kube-controller-manager   0                   3ee6fef707dd3       kube-controller-manager-old-k8s-version-152344   kube-system
	
	
	==> coredns [14a3f180347c69bf6ac6efe49813788fdc5da5cb3156ad63508388c6d07a9a04] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 25cf5af2951e282c4b0e961a02fb5d3e57c974501832fee92eec17b5135b9ec9d9e87d2ac94e6d117a5ed3dd54e8800aa7b4479706eb54497145ccdb80397d1b
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:34067 - 37870 "HINFO IN 3933812005234694741.9070239149324690186. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.08068596s
	
	
	==> describe nodes <==
	Name:               old-k8s-version-152344
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-152344
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=21e20c7776311c6e29254646bf2620ea610dd192
	                    minikube.k8s.io/name=old-k8s-version-152344
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_01T09_19_01_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 01 Nov 2025 09:18:57 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-152344
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 01 Nov 2025 09:19:30 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 01 Nov 2025 09:19:31 +0000   Sat, 01 Nov 2025 09:18:56 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 01 Nov 2025 09:19:31 +0000   Sat, 01 Nov 2025 09:18:56 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 01 Nov 2025 09:19:31 +0000   Sat, 01 Nov 2025 09:18:56 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 01 Nov 2025 09:19:31 +0000   Sat, 01 Nov 2025 09:19:26 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.103.2
	  Hostname:    old-k8s-version-152344
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863348Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863348Ki
	  pods:               110
	System Info:
	  Machine ID:                 98aac72b9abe9f06f1b9b38568f5cc96
	  System UUID:                2997294f-a5eb-4a19-8c2c-94960c03c89f
	  Boot ID:                    87f40279-f2b1-40f3-beea-00aea5942dd1
	  Kernel Version:             6.8.0-1043-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         10s
	  kube-system                 coredns-5dd5756b68-gcvgr                          100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     26s
	  kube-system                 etcd-old-k8s-version-152344                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         39s
	  kube-system                 kindnet-9lbnx                                     100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      26s
	  kube-system                 kube-apiserver-old-k8s-version-152344             250m (3%)     0 (0%)      0 (0%)           0 (0%)         41s
	  kube-system                 kube-controller-manager-old-k8s-version-152344    200m (2%)     0 (0%)      0 (0%)           0 (0%)         39s
	  kube-system                 kube-proxy-w5hpl                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         26s
	  kube-system                 kube-scheduler-old-k8s-version-152344             100m (1%)     0 (0%)      0 (0%)           0 (0%)         39s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         25s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 24s                kube-proxy       
	  Normal  Starting                 45s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  44s (x9 over 45s)  kubelet          Node old-k8s-version-152344 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    44s (x8 over 45s)  kubelet          Node old-k8s-version-152344 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     44s (x7 over 45s)  kubelet          Node old-k8s-version-152344 status is now: NodeHasSufficientPID
	  Normal  Starting                 39s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  39s                kubelet          Node old-k8s-version-152344 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    39s                kubelet          Node old-k8s-version-152344 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     39s                kubelet          Node old-k8s-version-152344 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           27s                node-controller  Node old-k8s-version-152344 event: Registered Node old-k8s-version-152344 in Controller
	  Normal  NodeReady                13s                kubelet          Node old-k8s-version-152344 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.000025] ll header: 00000000: 0a 94 8e 4a 1a 1e 72 e7 eb 50 9b c7 08 00
	[  +2.047726] IPv4: martian source 10.105.176.244 from 192.168.49.2, on dev eth0
	[  +0.000008] ll header: 00000000: 0a 94 8e 4a 1a 1e 72 e7 eb 50 9b c7 08 00
	[Nov 1 08:38] IPv4: martian source 10.105.176.244 from 192.168.49.2, on dev eth0
	[  +0.000025] ll header: 00000000: 0a 94 8e 4a 1a 1e 72 e7 eb 50 9b c7 08 00
	[  +2.215674] IPv4: martian source 10.105.176.244 from 192.168.49.2, on dev eth0
	[  +0.000007] ll header: 00000000: 0a 94 8e 4a 1a 1e 72 e7 eb 50 9b c7 08 00
	[  +1.047939] IPv4: martian source 10.244.0.3 from 192.168.49.2, on dev eth0
	[  +0.000023] ll header: 00000000: 0a 94 8e 4a 1a 1e 72 e7 eb 50 9b c7 08 00
	[  +1.023915] IPv4: martian source 10.244.0.3 from 192.168.49.2, on dev eth0
	[  +0.000007] ll header: 00000000: 0a 94 8e 4a 1a 1e 72 e7 eb 50 9b c7 08 00
	[  +1.023851] IPv4: martian source 10.244.0.3 from 192.168.49.2, on dev eth0
	[  +0.000021] ll header: 00000000: 0a 94 8e 4a 1a 1e 72 e7 eb 50 9b c7 08 00
	[  +1.023867] IPv4: martian source 10.244.0.3 from 192.168.49.2, on dev eth0
	[  +0.000026] ll header: 00000000: 0a 94 8e 4a 1a 1e 72 e7 eb 50 9b c7 08 00
	[  +1.023880] IPv4: martian source 10.244.0.3 from 192.168.49.2, on dev eth0
	[  +0.000033] ll header: 00000000: 0a 94 8e 4a 1a 1e 72 e7 eb 50 9b c7 08 00
	[  +1.023884] IPv4: martian source 10.244.0.3 from 192.168.49.2, on dev eth0
	[  +0.000029] ll header: 00000000: 0a 94 8e 4a 1a 1e 72 e7 eb 50 9b c7 08 00
	[  +1.023851] IPv4: martian source 10.244.0.3 from 192.168.49.2, on dev eth0
	[  +0.000028] ll header: 00000000: 0a 94 8e 4a 1a 1e 72 e7 eb 50 9b c7 08 00
	[  +4.031524] IPv4: martian source 10.244.0.3 from 192.168.49.2, on dev eth0
	[  +0.000026] ll header: 00000000: 0a 94 8e 4a 1a 1e 72 e7 eb 50 9b c7 08 00
	[  +8.255123] IPv4: martian source 10.244.0.3 from 192.168.49.2, on dev eth0
	[  +0.000022] ll header: 00000000: 0a 94 8e 4a 1a 1e 72 e7 eb 50 9b c7 08 00
	
	
	==> etcd [3c4b9272e9335eaeb3bda69470b945a5516d5ef241ce9f1cb5e4198d83d4c259] <==
	{"level":"info","ts":"2025-11-01T09:18:56.26773Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-11-01T09:18:56.267803Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-11-01T09:18:56.26867Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-11-01T09:18:56.2687Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-11-01T09:18:56.268779Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-01T09:18:56.269159Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.103.2:2379"}
	{"level":"info","ts":"2025-11-01T09:18:56.269441Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"3336683c081d149d","local-member-id":"f23060b075c4c089","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-01T09:18:56.269536Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-01T09:18:56.269569Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-01T09:18:56.269712Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-11-01T09:19:13.288854Z","caller":"traceutil/trace.go:171","msg":"trace[432184972] linearizableReadLoop","detail":"{readStateIndex:348; appliedIndex:347; }","duration":"108.212567ms","start":"2025-11-01T09:19:13.18062Z","end":"2025-11-01T09:19:13.288833Z","steps":["trace[432184972] 'read index received'  (duration: 52.791546ms)","trace[432184972] 'applied index is now lower than readState.Index'  (duration: 55.420176ms)"],"step_count":2}
	{"level":"info","ts":"2025-11-01T09:19:13.288912Z","caller":"traceutil/trace.go:171","msg":"trace[444723925] transaction","detail":"{read_only:false; response_revision:336; number_of_response:1; }","duration":"146.152423ms","start":"2025-11-01T09:19:13.142747Z","end":"2025-11-01T09:19:13.2889Z","steps":["trace[444723925] 'process raft request'  (duration: 90.701739ms)","trace[444723925] 'compare'  (duration: 55.235758ms)"],"step_count":2}
	{"level":"warn","ts":"2025-11-01T09:19:13.289058Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"108.414789ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/deployment-controller\" ","response":"range_response_count:1 size:207"}
	{"level":"info","ts":"2025-11-01T09:19:13.290587Z","caller":"traceutil/trace.go:171","msg":"trace[67635830] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/deployment-controller; range_end:; response_count:1; response_revision:336; }","duration":"109.96986ms","start":"2025-11-01T09:19:13.180583Z","end":"2025-11-01T09:19:13.290553Z","steps":["trace[67635830] 'agreement among raft nodes before linearized reading'  (duration: 108.34899ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-01T09:19:13.483238Z","caller":"traceutil/trace.go:171","msg":"trace[74214139] transaction","detail":"{read_only:false; response_revision:339; number_of_response:1; }","duration":"124.885692ms","start":"2025-11-01T09:19:13.358336Z","end":"2025-11-01T09:19:13.483222Z","steps":["trace[74214139] 'process raft request'  (duration: 124.842989ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-01T09:19:13.483274Z","caller":"traceutil/trace.go:171","msg":"trace[1228853922] transaction","detail":"{read_only:false; response_revision:338; number_of_response:1; }","duration":"185.474868ms","start":"2025-11-01T09:19:13.297746Z","end":"2025-11-01T09:19:13.483221Z","steps":["trace[1228853922] 'process raft request'  (duration: 119.633413ms)","trace[1228853922] 'compare'  (duration: 65.680543ms)"],"step_count":2}
	{"level":"warn","ts":"2025-11-01T09:19:13.483458Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"153.649327ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/endpointslice-controller\" ","response":"range_response_count:1 size:214"}
	{"level":"info","ts":"2025-11-01T09:19:13.483507Z","caller":"traceutil/trace.go:171","msg":"trace[1377076131] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/endpointslice-controller; range_end:; response_count:1; response_revision:339; }","duration":"153.712756ms","start":"2025-11-01T09:19:13.329783Z","end":"2025-11-01T09:19:13.483496Z","steps":["trace[1377076131] 'agreement among raft nodes before linearized reading'  (duration: 153.531058ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-01T09:19:13.483283Z","caller":"traceutil/trace.go:171","msg":"trace[1932336530] linearizableReadLoop","detail":"{readStateIndex:350; appliedIndex:349; }","duration":"153.409477ms","start":"2025-11-01T09:19:13.329824Z","end":"2025-11-01T09:19:13.483233Z","steps":["trace[1932336530] 'read index received'  (duration: 24.96847ms)","trace[1932336530] 'applied index is now lower than readState.Index'  (duration: 128.437427ms)"],"step_count":2}
	{"level":"warn","ts":"2025-11-01T09:19:13.483648Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"103.372535ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/daemon-set-controller\" ","response":"range_response_count:1 size:207"}
	{"level":"info","ts":"2025-11-01T09:19:13.48369Z","caller":"traceutil/trace.go:171","msg":"trace[995032343] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/daemon-set-controller; range_end:; response_count:1; response_revision:339; }","duration":"103.423649ms","start":"2025-11-01T09:19:13.380259Z","end":"2025-11-01T09:19:13.483683Z","steps":["trace[995032343] 'agreement among raft nodes before linearized reading'  (duration: 103.335219ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-01T09:19:13.623288Z","caller":"traceutil/trace.go:171","msg":"trace[1005408913] transaction","detail":"{read_only:false; response_revision:341; number_of_response:1; }","duration":"130.286512ms","start":"2025-11-01T09:19:13.492983Z","end":"2025-11-01T09:19:13.623269Z","steps":["trace[1005408913] 'process raft request'  (duration: 125.996762ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-01T09:19:13.62372Z","caller":"traceutil/trace.go:171","msg":"trace[785518052] transaction","detail":"{read_only:false; response_revision:342; number_of_response:1; }","duration":"130.394197ms","start":"2025-11-01T09:19:13.493298Z","end":"2025-11-01T09:19:13.623692Z","steps":["trace[785518052] 'process raft request'  (duration: 129.892736ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-01T09:19:13.625626Z","caller":"traceutil/trace.go:171","msg":"trace[852906879] transaction","detail":"{read_only:false; response_revision:343; number_of_response:1; }","duration":"131.288529ms","start":"2025-11-01T09:19:13.493385Z","end":"2025-11-01T09:19:13.624674Z","steps":["trace[852906879] 'process raft request'  (duration: 130.199212ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-01T09:19:13.626167Z","caller":"traceutil/trace.go:171","msg":"trace[2118115031] transaction","detail":"{read_only:false; response_revision:344; number_of_response:1; }","duration":"132.160071ms","start":"2025-11-01T09:19:13.493992Z","end":"2025-11-01T09:19:13.626152Z","steps":["trace[2118115031] 'process raft request'  (duration: 131.9291ms)"],"step_count":1}
	
	
	==> kernel <==
	 09:19:39 up  1:02,  0 user,  load average: 3.32, 2.33, 1.39
	Linux old-k8s-version-152344 6.8.0-1043-gcp #46~22.04.1-Ubuntu SMP Wed Oct 22 19:00:03 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [a12c56b358f171a823706d470a2c93b44de5c821405e604cffd60e49a660e713] <==
	I1101 09:19:16.125915       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1101 09:19:16.126209       1 main.go:139] hostIP = 192.168.103.2
	podIP = 192.168.103.2
	I1101 09:19:16.126368       1 main.go:148] setting mtu 1500 for CNI 
	I1101 09:19:16.126385       1 main.go:178] kindnetd IP family: "ipv4"
	I1101 09:19:16.126405       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-01T09:19:16Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1101 09:19:16.422888       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1101 09:19:16.422973       1 controller.go:381] "Waiting for informer caches to sync"
	I1101 09:19:16.422988       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1101 09:19:16.423563       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1101 09:19:16.623323       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1101 09:19:16.623355       1 metrics.go:72] Registering metrics
	I1101 09:19:16.623407       1 controller.go:711] "Syncing nftables rules"
	I1101 09:19:26.423419       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1101 09:19:26.423493       1 main.go:301] handling current node
	I1101 09:19:36.423361       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1101 09:19:36.423391       1 main.go:301] handling current node
	
	
	==> kube-apiserver [ebe8e9d310708efb3e6a9dcb20b60720be3f7be9f8f62ed3236783134f77bf6e] <==
	I1101 09:18:57.754681       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I1101 09:18:57.755138       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1101 09:18:57.755501       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1101 09:18:57.755634       1 aggregator.go:166] initial CRD sync complete...
	I1101 09:18:57.755660       1 autoregister_controller.go:141] Starting autoregister controller
	I1101 09:18:57.755667       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1101 09:18:57.755674       1 cache.go:39] Caches are synced for autoregister controller
	I1101 09:18:57.756849       1 controller.go:624] quota admission added evaluator for: namespaces
	I1101 09:18:57.782081       1 shared_informer.go:318] Caches are synced for node_authorizer
	I1101 09:18:57.943921       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1101 09:18:58.658725       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1101 09:18:58.662170       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1101 09:18:58.662192       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1101 09:18:59.087170       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1101 09:18:59.126346       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1101 09:18:59.268918       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1101 09:18:59.275116       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [192.168.103.2]
	I1101 09:18:59.276235       1 controller.go:624] quota admission added evaluator for: endpoints
	I1101 09:18:59.280689       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1101 09:18:59.707589       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1101 09:19:00.469034       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1101 09:19:00.482338       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1101 09:19:00.492449       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I1101 09:19:13.636880       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I1101 09:19:13.706013       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
	
	
	==> kube-controller-manager [fc31241523cb31d0f7dfc56b975767dcc2397b6220a86068b8d67ba77507e79f] <==
	I1101 09:19:12.982758       1 shared_informer.go:318] Caches are synced for daemon sets
	I1101 09:19:12.983953       1 shared_informer.go:318] Caches are synced for endpoint_slice
	I1101 09:19:12.987368       1 shared_informer.go:318] Caches are synced for resource quota
	I1101 09:19:12.988585       1 shared_informer.go:318] Caches are synced for resource quota
	I1101 09:19:13.314974       1 shared_informer.go:318] Caches are synced for garbage collector
	I1101 09:19:13.332557       1 shared_informer.go:318] Caches are synced for garbage collector
	I1101 09:19:13.332597       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1101 09:19:13.650577       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-5dd5756b68 to 2"
	I1101 09:19:13.739417       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-w5hpl"
	I1101 09:19:13.771675       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-9lbnx"
	I1101 09:19:13.813098       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-d5x4f"
	I1101 09:19:13.830783       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-gcvgr"
	I1101 09:19:13.844759       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="197.219692ms"
	I1101 09:19:13.854095       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="9.174772ms"
	I1101 09:19:13.854252       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="109.547µs"
	I1101 09:19:14.286374       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-5dd5756b68 to 1 from 2"
	I1101 09:19:14.305503       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-5dd5756b68-d5x4f"
	I1101 09:19:14.318649       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="34.766195ms"
	I1101 09:19:14.328211       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="7.805821ms"
	I1101 09:19:14.328391       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="127.252µs"
	I1101 09:19:26.777181       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="135.474µs"
	I1101 09:19:26.788633       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="126.812µs"
	I1101 09:19:27.654757       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="7.768002ms"
	I1101 09:19:27.654907       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="104.8µs"
	I1101 09:19:27.899600       1 node_lifecycle_controller.go:1048] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	
	
	==> kube-proxy [4b8cb5ae5d1f154872bf3400cf11a28a09eed60f76ba95b7cbde390cfe5547fd] <==
	I1101 09:19:14.297158       1 server_others.go:69] "Using iptables proxy"
	I1101 09:19:14.315996       1 node.go:141] Successfully retrieved node IP: 192.168.103.2
	I1101 09:19:14.350964       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1101 09:19:14.354575       1 server_others.go:152] "Using iptables Proxier"
	I1101 09:19:14.354642       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1101 09:19:14.354653       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1101 09:19:14.354700       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1101 09:19:14.355005       1 server.go:846] "Version info" version="v1.28.0"
	I1101 09:19:14.355073       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1101 09:19:14.355813       1 config.go:97] "Starting endpoint slice config controller"
	I1101 09:19:14.355851       1 config.go:315] "Starting node config controller"
	I1101 09:19:14.355986       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1101 09:19:14.355962       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1101 09:19:14.355902       1 config.go:188] "Starting service config controller"
	I1101 09:19:14.356027       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1101 09:19:14.456124       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1101 09:19:14.456128       1 shared_informer.go:318] Caches are synced for service config
	I1101 09:19:14.456129       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [8f31bf46697ee92f77423c1a95897c9413abdac3fbac3ea50232677a41edad83] <==
	E1101 09:18:57.723944       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1101 09:18:57.723983       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W1101 09:18:57.724240       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1101 09:18:57.724310       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W1101 09:18:57.724528       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1101 09:18:57.724593       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W1101 09:18:57.724824       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1101 09:18:57.724925       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W1101 09:18:58.590294       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1101 09:18:58.590333       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W1101 09:18:58.593952       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1101 09:18:58.593993       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W1101 09:18:58.608554       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1101 09:18:58.608593       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W1101 09:18:58.634592       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1101 09:18:58.634629       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W1101 09:18:58.731999       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1101 09:18:58.732037       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W1101 09:18:58.752616       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1101 09:18:58.752661       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W1101 09:18:58.912612       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1101 09:18:58.912658       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W1101 09:18:58.914756       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1101 09:18:58.914784       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I1101 09:19:01.613243       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Nov 01 09:19:12 old-k8s-version-152344 kubelet[1385]: I1101 09:19:12.813523    1385 kuberuntime_manager.go:1463] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Nov 01 09:19:12 old-k8s-version-152344 kubelet[1385]: I1101 09:19:12.814277    1385 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Nov 01 09:19:13 old-k8s-version-152344 kubelet[1385]: I1101 09:19:13.764418    1385 topology_manager.go:215] "Topology Admit Handler" podUID="dccd7023-4810-4cc3-9ebd-d7fe6cffce88" podNamespace="kube-system" podName="kube-proxy-w5hpl"
	Nov 01 09:19:13 old-k8s-version-152344 kubelet[1385]: I1101 09:19:13.774573    1385 topology_manager.go:215] "Topology Admit Handler" podUID="cdc79b68-caf2-4ddb-afff-7391a2e1402f" podNamespace="kube-system" podName="kindnet-9lbnx"
	Nov 01 09:19:13 old-k8s-version-152344 kubelet[1385]: I1101 09:19:13.910622    1385 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/dccd7023-4810-4cc3-9ebd-d7fe6cffce88-kube-proxy\") pod \"kube-proxy-w5hpl\" (UID: \"dccd7023-4810-4cc3-9ebd-d7fe6cffce88\") " pod="kube-system/kube-proxy-w5hpl"
	Nov 01 09:19:13 old-k8s-version-152344 kubelet[1385]: I1101 09:19:13.910687    1385 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/dccd7023-4810-4cc3-9ebd-d7fe6cffce88-xtables-lock\") pod \"kube-proxy-w5hpl\" (UID: \"dccd7023-4810-4cc3-9ebd-d7fe6cffce88\") " pod="kube-system/kube-proxy-w5hpl"
	Nov 01 09:19:13 old-k8s-version-152344 kubelet[1385]: I1101 09:19:13.910719    1385 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rtkjd\" (UniqueName: \"kubernetes.io/projected/cdc79b68-caf2-4ddb-afff-7391a2e1402f-kube-api-access-rtkjd\") pod \"kindnet-9lbnx\" (UID: \"cdc79b68-caf2-4ddb-afff-7391a2e1402f\") " pod="kube-system/kindnet-9lbnx"
	Nov 01 09:19:13 old-k8s-version-152344 kubelet[1385]: I1101 09:19:13.910750    1385 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/cdc79b68-caf2-4ddb-afff-7391a2e1402f-cni-cfg\") pod \"kindnet-9lbnx\" (UID: \"cdc79b68-caf2-4ddb-afff-7391a2e1402f\") " pod="kube-system/kindnet-9lbnx"
	Nov 01 09:19:13 old-k8s-version-152344 kubelet[1385]: I1101 09:19:13.910797    1385 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/cdc79b68-caf2-4ddb-afff-7391a2e1402f-xtables-lock\") pod \"kindnet-9lbnx\" (UID: \"cdc79b68-caf2-4ddb-afff-7391a2e1402f\") " pod="kube-system/kindnet-9lbnx"
	Nov 01 09:19:13 old-k8s-version-152344 kubelet[1385]: I1101 09:19:13.910824    1385 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/cdc79b68-caf2-4ddb-afff-7391a2e1402f-lib-modules\") pod \"kindnet-9lbnx\" (UID: \"cdc79b68-caf2-4ddb-afff-7391a2e1402f\") " pod="kube-system/kindnet-9lbnx"
	Nov 01 09:19:13 old-k8s-version-152344 kubelet[1385]: I1101 09:19:13.910852    1385 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/dccd7023-4810-4cc3-9ebd-d7fe6cffce88-lib-modules\") pod \"kube-proxy-w5hpl\" (UID: \"dccd7023-4810-4cc3-9ebd-d7fe6cffce88\") " pod="kube-system/kube-proxy-w5hpl"
	Nov 01 09:19:13 old-k8s-version-152344 kubelet[1385]: I1101 09:19:13.911366    1385 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qscd9\" (UniqueName: \"kubernetes.io/projected/dccd7023-4810-4cc3-9ebd-d7fe6cffce88-kube-api-access-qscd9\") pod \"kube-proxy-w5hpl\" (UID: \"dccd7023-4810-4cc3-9ebd-d7fe6cffce88\") " pod="kube-system/kube-proxy-w5hpl"
	Nov 01 09:19:14 old-k8s-version-152344 kubelet[1385]: I1101 09:19:14.603111    1385 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-w5hpl" podStartSLOduration=1.603056512 podCreationTimestamp="2025-11-01 09:19:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 09:19:14.602859166 +0000 UTC m=+14.165825472" watchObservedRunningTime="2025-11-01 09:19:14.603056512 +0000 UTC m=+14.166022819"
	Nov 01 09:19:16 old-k8s-version-152344 kubelet[1385]: I1101 09:19:16.606617    1385 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kindnet-9lbnx" podStartSLOduration=1.7604413270000001 podCreationTimestamp="2025-11-01 09:19:13 +0000 UTC" firstStartedPulling="2025-11-01 09:19:14.093493492 +0000 UTC m=+13.656459791" lastFinishedPulling="2025-11-01 09:19:15.939626427 +0000 UTC m=+15.502592719" observedRunningTime="2025-11-01 09:19:16.606429595 +0000 UTC m=+16.169395904" watchObservedRunningTime="2025-11-01 09:19:16.606574255 +0000 UTC m=+16.169540563"
	Nov 01 09:19:26 old-k8s-version-152344 kubelet[1385]: I1101 09:19:26.750017    1385 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
	Nov 01 09:19:26 old-k8s-version-152344 kubelet[1385]: I1101 09:19:26.777317    1385 topology_manager.go:215] "Topology Admit Handler" podUID="5ec9963a-a709-4a14-a266-039c4d3d9ebe" podNamespace="kube-system" podName="coredns-5dd5756b68-gcvgr"
	Nov 01 09:19:26 old-k8s-version-152344 kubelet[1385]: I1101 09:19:26.778956    1385 topology_manager.go:215] "Topology Admit Handler" podUID="d5b72a56-2397-4702-8443-4b854af93d01" podNamespace="kube-system" podName="storage-provisioner"
	Nov 01 09:19:26 old-k8s-version-152344 kubelet[1385]: I1101 09:19:26.899220    1385 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5ec9963a-a709-4a14-a266-039c4d3d9ebe-config-volume\") pod \"coredns-5dd5756b68-gcvgr\" (UID: \"5ec9963a-a709-4a14-a266-039c4d3d9ebe\") " pod="kube-system/coredns-5dd5756b68-gcvgr"
	Nov 01 09:19:26 old-k8s-version-152344 kubelet[1385]: I1101 09:19:26.899282    1385 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/d5b72a56-2397-4702-8443-4b854af93d01-tmp\") pod \"storage-provisioner\" (UID: \"d5b72a56-2397-4702-8443-4b854af93d01\") " pod="kube-system/storage-provisioner"
	Nov 01 09:19:26 old-k8s-version-152344 kubelet[1385]: I1101 09:19:26.899319    1385 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cds4z\" (UniqueName: \"kubernetes.io/projected/5ec9963a-a709-4a14-a266-039c4d3d9ebe-kube-api-access-cds4z\") pod \"coredns-5dd5756b68-gcvgr\" (UID: \"5ec9963a-a709-4a14-a266-039c4d3d9ebe\") " pod="kube-system/coredns-5dd5756b68-gcvgr"
	Nov 01 09:19:26 old-k8s-version-152344 kubelet[1385]: I1101 09:19:26.899485    1385 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xnxn6\" (UniqueName: \"kubernetes.io/projected/d5b72a56-2397-4702-8443-4b854af93d01-kube-api-access-xnxn6\") pod \"storage-provisioner\" (UID: \"d5b72a56-2397-4702-8443-4b854af93d01\") " pod="kube-system/storage-provisioner"
	Nov 01 09:19:27 old-k8s-version-152344 kubelet[1385]: I1101 09:19:27.647146    1385 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-gcvgr" podStartSLOduration=14.647089481 podCreationTimestamp="2025-11-01 09:19:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 09:19:27.647001441 +0000 UTC m=+27.209967747" watchObservedRunningTime="2025-11-01 09:19:27.647089481 +0000 UTC m=+27.210055788"
	Nov 01 09:19:27 old-k8s-version-152344 kubelet[1385]: I1101 09:19:27.647273    1385 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=13.647247131 podCreationTimestamp="2025-11-01 09:19:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 09:19:27.634386447 +0000 UTC m=+27.197352752" watchObservedRunningTime="2025-11-01 09:19:27.647247131 +0000 UTC m=+27.210213437"
	Nov 01 09:19:29 old-k8s-version-152344 kubelet[1385]: I1101 09:19:29.640031    1385 topology_manager.go:215] "Topology Admit Handler" podUID="9fab4cbc-fb02-4d8c-a42d-3898aed47002" podNamespace="default" podName="busybox"
	Nov 01 09:19:29 old-k8s-version-152344 kubelet[1385]: I1101 09:19:29.817102    1385 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2q28l\" (UniqueName: \"kubernetes.io/projected/9fab4cbc-fb02-4d8c-a42d-3898aed47002-kube-api-access-2q28l\") pod \"busybox\" (UID: \"9fab4cbc-fb02-4d8c-a42d-3898aed47002\") " pod="default/busybox"
	
	
	==> storage-provisioner [3171eb91502f479c8717e0f81d472802356ee6c5396a74868f10a88121bb6087] <==
	I1101 09:19:27.143501       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1101 09:19:27.152541       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1101 09:19:27.152579       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1101 09:19:27.159457       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1101 09:19:27.159652       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-152344_2107f479-406f-4672-b59b-6c6b3f00108c!
	I1101 09:19:27.160346       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"d54d713e-3c09-4409-a20a-85838e16fc43", APIVersion:"v1", ResourceVersion:"433", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-152344_2107f479-406f-4672-b59b-6c6b3f00108c became leader
	I1101 09:19:27.260157       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-152344_2107f479-406f-4672-b59b-6c6b3f00108c!
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-152344 -n old-k8s-version-152344
helpers_test.go:269: (dbg) Run:  kubectl --context old-k8s-version-152344 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (2.38s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (2.24s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-397460 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p no-preload-397460 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (258.316967ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T09:19:47Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
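A minimal reproduction sketch, not part of the test run: the paused-state check that fails above can be exercised by hand against the same profile. The "sudo runc list -f json" invocation is the one quoted in the stderr; reaching the node via "minikube ssh" and probing /run/runc directly are illustrative assumptions, not steps the test performs.

	out/minikube-linux-amd64 -p no-preload-397460 ssh -- sudo runc list -f json   # expected to fail with "open /run/runc: no such file or directory", matching the stderr above
	out/minikube-linux-amd64 -p no-preload-397460 ssh -- ls -ld /run/runc         # checks whether the runc state directory exists on this crio node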
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p no-preload-397460 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-397460 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context no-preload-397460 describe deploy/metrics-server -n kube-system: exit status 1 (60.946981ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

                                                
                                                
** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context no-preload-397460 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
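As a hedged aside, the image the assertion above looks for could be checked directly once the metrics-server deployment exists; the context name and namespace are taken from the commands above, while the jsonpath query is an illustrative assumption rather than what the test itself runs.

	kubectl --context no-preload-397460 -n kube-system get deploy metrics-server -o jsonpath='{.spec.template.spec.containers[*].image}'   # expected to include fake.domain/registry.k8s.io/echoserver:1.4 when the addon is configured as requested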
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect no-preload-397460
helpers_test.go:243: (dbg) docker inspect no-preload-397460:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "dcacf8ef764db0b040f92377cfa21ce046a2a4c6f431a1c080345899f64f1a0a",
	        "Created": "2025-11-01T09:18:48.77329288Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 233237,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-01T09:18:48.814569606Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:a1caeebaf98ed0136731e905a1e086f77985a42c2ebb5a7e0b3d0bd7fcbe10cc",
	        "ResolvConfPath": "/var/lib/docker/containers/dcacf8ef764db0b040f92377cfa21ce046a2a4c6f431a1c080345899f64f1a0a/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/dcacf8ef764db0b040f92377cfa21ce046a2a4c6f431a1c080345899f64f1a0a/hostname",
	        "HostsPath": "/var/lib/docker/containers/dcacf8ef764db0b040f92377cfa21ce046a2a4c6f431a1c080345899f64f1a0a/hosts",
	        "LogPath": "/var/lib/docker/containers/dcacf8ef764db0b040f92377cfa21ce046a2a4c6f431a1c080345899f64f1a0a/dcacf8ef764db0b040f92377cfa21ce046a2a4c6f431a1c080345899f64f1a0a-json.log",
	        "Name": "/no-preload-397460",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-397460:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "no-preload-397460",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "dcacf8ef764db0b040f92377cfa21ce046a2a4c6f431a1c080345899f64f1a0a",
	                "LowerDir": "/var/lib/docker/overlay2/0b34dab8141c8641f76f199b5dd54ea0b7163a5882ccc5e46e7cd5e259fdb760-init/diff:/var/lib/docker/overlay2/95b4043403bc6c2b0722fad0bb31817195e1e004282cc78b772e7c8c9f9def2d/diff",
	                "MergedDir": "/var/lib/docker/overlay2/0b34dab8141c8641f76f199b5dd54ea0b7163a5882ccc5e46e7cd5e259fdb760/merged",
	                "UpperDir": "/var/lib/docker/overlay2/0b34dab8141c8641f76f199b5dd54ea0b7163a5882ccc5e46e7cd5e259fdb760/diff",
	                "WorkDir": "/var/lib/docker/overlay2/0b34dab8141c8641f76f199b5dd54ea0b7163a5882ccc5e46e7cd5e259fdb760/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-397460",
	                "Source": "/var/lib/docker/volumes/no-preload-397460/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-397460",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-397460",
	                "name.minikube.sigs.k8s.io": "no-preload-397460",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "5d55d140b943942c3e079ad89bf5e6ea29256648db9e5011428e3636490382d8",
	            "SandboxKey": "/var/run/docker/netns/5d55d140b943",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33058"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33059"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33062"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33060"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33061"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-397460": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.94.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "b6:c7:8a:f4:24:e8",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "cc24cbf1ada0b118eca4d07595495e7c99849b988767800f68d20e97764309c9",
	                    "EndpointID": "8b7f0939add5f222b33b4a29baeb8c7080172da63430d3597e84a9c46f2e207d",
	                    "Gateway": "192.168.94.1",
	                    "IPAddress": "192.168.94.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-397460",
	                        "dcacf8ef764d"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-397460 -n no-preload-397460
helpers_test.go:252: <<< TestStartStop/group/no-preload/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-397460 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p no-preload-397460 logs -n 25: (1.116403949s)
helpers_test.go:260: TestStartStop/group/no-preload/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬────────────────────
─┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼────────────────────
─┤
	│ ssh     │ -p cilium-204434 sudo systemctl cat docker --no-pager                                                                                                                                                                                         │ cilium-204434          │ jenkins │ v1.37.0 │ 01 Nov 25 09:18 UTC │                     │
	│ ssh     │ -p cilium-204434 sudo cat /etc/docker/daemon.json                                                                                                                                                                                             │ cilium-204434          │ jenkins │ v1.37.0 │ 01 Nov 25 09:18 UTC │                     │
	│ ssh     │ -p cilium-204434 sudo docker system info                                                                                                                                                                                                      │ cilium-204434          │ jenkins │ v1.37.0 │ 01 Nov 25 09:18 UTC │                     │
	│ ssh     │ -p cilium-204434 sudo systemctl status cri-docker --all --full --no-pager                                                                                                                                                                     │ cilium-204434          │ jenkins │ v1.37.0 │ 01 Nov 25 09:18 UTC │                     │
	│ ssh     │ -p cilium-204434 sudo systemctl cat cri-docker --no-pager                                                                                                                                                                                     │ cilium-204434          │ jenkins │ v1.37.0 │ 01 Nov 25 09:18 UTC │                     │
	│ ssh     │ -p cilium-204434 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                                                                                                                                │ cilium-204434          │ jenkins │ v1.37.0 │ 01 Nov 25 09:18 UTC │                     │
	│ ssh     │ -p cilium-204434 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                                                                                                                          │ cilium-204434          │ jenkins │ v1.37.0 │ 01 Nov 25 09:18 UTC │                     │
	│ ssh     │ -p cilium-204434 sudo cri-dockerd --version                                                                                                                                                                                                   │ cilium-204434          │ jenkins │ v1.37.0 │ 01 Nov 25 09:18 UTC │                     │
	│ ssh     │ -p cilium-204434 sudo systemctl status containerd --all --full --no-pager                                                                                                                                                                     │ cilium-204434          │ jenkins │ v1.37.0 │ 01 Nov 25 09:18 UTC │                     │
	│ ssh     │ -p cilium-204434 sudo systemctl cat containerd --no-pager                                                                                                                                                                                     │ cilium-204434          │ jenkins │ v1.37.0 │ 01 Nov 25 09:18 UTC │                     │
	│ ssh     │ -p cilium-204434 sudo cat /lib/systemd/system/containerd.service                                                                                                                                                                              │ cilium-204434          │ jenkins │ v1.37.0 │ 01 Nov 25 09:18 UTC │                     │
	│ ssh     │ -p cilium-204434 sudo cat /etc/containerd/config.toml                                                                                                                                                                                         │ cilium-204434          │ jenkins │ v1.37.0 │ 01 Nov 25 09:18 UTC │                     │
	│ ssh     │ -p cilium-204434 sudo containerd config dump                                                                                                                                                                                                  │ cilium-204434          │ jenkins │ v1.37.0 │ 01 Nov 25 09:18 UTC │                     │
	│ ssh     │ -p cilium-204434 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                           │ cilium-204434          │ jenkins │ v1.37.0 │ 01 Nov 25 09:18 UTC │                     │
	│ ssh     │ -p cilium-204434 sudo systemctl cat crio --no-pager                                                                                                                                                                                           │ cilium-204434          │ jenkins │ v1.37.0 │ 01 Nov 25 09:18 UTC │                     │
	│ ssh     │ -p cilium-204434 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                 │ cilium-204434          │ jenkins │ v1.37.0 │ 01 Nov 25 09:18 UTC │                     │
	│ ssh     │ -p cilium-204434 sudo crio config                                                                                                                                                                                                             │ cilium-204434          │ jenkins │ v1.37.0 │ 01 Nov 25 09:18 UTC │                     │
	│ delete  │ -p cilium-204434                                                                                                                                                                                                                              │ cilium-204434          │ jenkins │ v1.37.0 │ 01 Nov 25 09:18 UTC │ 01 Nov 25 09:18 UTC │
	│ start   │ -p old-k8s-version-152344 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-152344 │ jenkins │ v1.37.0 │ 01 Nov 25 09:18 UTC │ 01 Nov 25 09:19 UTC │
	│ delete  │ -p running-upgrade-274843                                                                                                                                                                                                                     │ running-upgrade-274843 │ jenkins │ v1.37.0 │ 01 Nov 25 09:18 UTC │ 01 Nov 25 09:18 UTC │
	│ start   │ -p no-preload-397460 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-397460      │ jenkins │ v1.37.0 │ 01 Nov 25 09:18 UTC │ 01 Nov 25 09:19 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-152344 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-152344 │ jenkins │ v1.37.0 │ 01 Nov 25 09:19 UTC │                     │
	│ stop    │ -p old-k8s-version-152344 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-152344 │ jenkins │ v1.37.0 │ 01 Nov 25 09:19 UTC │                     │
	│ start   │ -p cert-expiration-303094 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-303094 │ jenkins │ v1.37.0 │ 01 Nov 25 09:19 UTC │                     │
	│ addons  │ enable metrics-server -p no-preload-397460 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-397460      │ jenkins │ v1.37.0 │ 01 Nov 25 09:19 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴────────────────────
─┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/01 09:19:43
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1101 09:19:43.350553  240980 out.go:360] Setting OutFile to fd 1 ...
	I1101 09:19:43.350822  240980 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:19:43.350825  240980 out.go:374] Setting ErrFile to fd 2...
	I1101 09:19:43.350829  240980 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:19:43.351032  240980 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21835-5913/.minikube/bin
	I1101 09:19:43.351545  240980 out.go:368] Setting JSON to false
	I1101 09:19:43.352710  240980 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":3731,"bootTime":1761985052,"procs":314,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1101 09:19:43.352820  240980 start.go:143] virtualization: kvm guest
	I1101 09:19:43.355033  240980 out.go:179] * [cert-expiration-303094] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1101 09:19:43.356772  240980 notify.go:221] Checking for updates...
	I1101 09:19:43.356797  240980 out.go:179]   - MINIKUBE_LOCATION=21835
	I1101 09:19:43.358150  240980 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1101 09:19:43.359614  240980 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21835-5913/kubeconfig
	I1101 09:19:43.360756  240980 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21835-5913/.minikube
	I1101 09:19:43.362275  240980 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1101 09:19:43.363761  240980 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1101 09:19:43.365474  240980 config.go:182] Loaded profile config "cert-expiration-303094": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:19:43.366030  240980 driver.go:422] Setting default libvirt URI to qemu:///system
	I1101 09:19:43.391679  240980 docker.go:124] docker version: linux-28.5.1:Docker Engine - Community
	I1101 09:19:43.391768  240980 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 09:19:43.449188  240980 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:85 OomKillDisable:false NGoroutines:93 SystemTime:2025-11-01 09:19:43.438691424 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1101 09:19:43.449307  240980 docker.go:319] overlay module found
	I1101 09:19:43.450846  240980 out.go:179] * Using the docker driver based on existing profile
	I1101 09:19:43.452041  240980 start.go:309] selected driver: docker
	I1101 09:19:43.452050  240980 start.go:930] validating driver "docker" against &{Name:cert-expiration-303094 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:cert-expiration-303094 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:3m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 09:19:43.452122  240980 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1101 09:19:43.452767  240980 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 09:19:43.513830  240980 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:85 OomKillDisable:false NGoroutines:93 SystemTime:2025-11-01 09:19:43.504255226 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1101 09:19:43.514097  240980 cni.go:84] Creating CNI manager for ""
	I1101 09:19:43.514147  240980 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1101 09:19:43.514182  240980 start.go:353] cluster config:
	{Name:cert-expiration-303094 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:cert-expiration-303094 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:8760h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 09:19:43.516062  240980 out.go:179] * Starting "cert-expiration-303094" primary control-plane node in "cert-expiration-303094" cluster
	I1101 09:19:43.517395  240980 cache.go:124] Beginning downloading kic base image for docker with crio
	I1101 09:19:43.518559  240980 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1101 09:19:43.519683  240980 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1101 09:19:43.519724  240980 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21835-5913/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1101 09:19:43.519757  240980 cache.go:59] Caching tarball of preloaded images
	I1101 09:19:43.519780  240980 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1101 09:19:43.519895  240980 preload.go:233] Found /home/jenkins/minikube-integration/21835-5913/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1101 09:19:43.519905  240980 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1101 09:19:43.520032  240980 profile.go:143] Saving config to /home/jenkins/minikube-integration/21835-5913/.minikube/profiles/cert-expiration-303094/config.json ...
	I1101 09:19:43.542496  240980 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1101 09:19:43.542516  240980 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1101 09:19:43.542534  240980 cache.go:233] Successfully downloaded all kic artifacts
	I1101 09:19:43.542564  240980 start.go:360] acquireMachinesLock for cert-expiration-303094: {Name:mkaa9194868d6e5ad00394efa161b20a73290890 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1101 09:19:43.542623  240980 start.go:364] duration metric: took 46.043µs to acquireMachinesLock for "cert-expiration-303094"
	I1101 09:19:43.542638  240980 start.go:96] Skipping create...Using existing machine configuration
	I1101 09:19:43.542643  240980 fix.go:54] fixHost starting: 
	I1101 09:19:43.542847  240980 cli_runner.go:164] Run: docker container inspect cert-expiration-303094 --format={{.State.Status}}
	I1101 09:19:43.561042  240980 fix.go:112] recreateIfNeeded on cert-expiration-303094: state=Running err=<nil>
	W1101 09:19:43.561064  240980 fix.go:138] unexpected machine state, will restart: <nil>
	I1101 09:19:43.563084  240980 out.go:252] * Updating the running docker "cert-expiration-303094" container ...
	I1101 09:19:43.563112  240980 machine.go:94] provisionDockerMachine start ...
	I1101 09:19:43.563211  240980 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-303094
	I1101 09:19:43.582028  240980 main.go:143] libmachine: Using SSH client type: native
	I1101 09:19:43.582323  240980 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33008 <nil> <nil>}
	I1101 09:19:43.582332  240980 main.go:143] libmachine: About to run SSH command:
	hostname
	I1101 09:19:43.725246  240980 main.go:143] libmachine: SSH cmd err, output: <nil>: cert-expiration-303094
	
	I1101 09:19:43.725264  240980 ubuntu.go:182] provisioning hostname "cert-expiration-303094"
	I1101 09:19:43.725342  240980 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-303094
	I1101 09:19:43.744621  240980 main.go:143] libmachine: Using SSH client type: native
	I1101 09:19:43.744839  240980 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33008 <nil> <nil>}
	I1101 09:19:43.744846  240980 main.go:143] libmachine: About to run SSH command:
	sudo hostname cert-expiration-303094 && echo "cert-expiration-303094" | sudo tee /etc/hostname
	I1101 09:19:43.897017  240980 main.go:143] libmachine: SSH cmd err, output: <nil>: cert-expiration-303094
	
	I1101 09:19:43.897086  240980 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-303094
	I1101 09:19:43.916387  240980 main.go:143] libmachine: Using SSH client type: native
	I1101 09:19:43.916591  240980 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33008 <nil> <nil>}
	I1101 09:19:43.916602  240980 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\scert-expiration-303094' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 cert-expiration-303094/g' /etc/hosts;
				else 
					echo '127.0.1.1 cert-expiration-303094' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1101 09:19:44.060903  240980 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1101 09:19:44.060923  240980 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21835-5913/.minikube CaCertPath:/home/jenkins/minikube-integration/21835-5913/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21835-5913/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21835-5913/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21835-5913/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21835-5913/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21835-5913/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21835-5913/.minikube}
	I1101 09:19:44.060958  240980 ubuntu.go:190] setting up certificates
	I1101 09:19:44.060969  240980 provision.go:84] configureAuth start
	I1101 09:19:44.061021  240980 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" cert-expiration-303094
	I1101 09:19:44.079428  240980 provision.go:143] copyHostCerts
	I1101 09:19:44.079475  240980 exec_runner.go:144] found /home/jenkins/minikube-integration/21835-5913/.minikube/ca.pem, removing ...
	I1101 09:19:44.079487  240980 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21835-5913/.minikube/ca.pem
	I1101 09:19:44.079550  240980 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21835-5913/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21835-5913/.minikube/ca.pem (1078 bytes)
	I1101 09:19:44.079660  240980 exec_runner.go:144] found /home/jenkins/minikube-integration/21835-5913/.minikube/cert.pem, removing ...
	I1101 09:19:44.079665  240980 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21835-5913/.minikube/cert.pem
	I1101 09:19:44.079694  240980 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21835-5913/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21835-5913/.minikube/cert.pem (1123 bytes)
	I1101 09:19:44.079772  240980 exec_runner.go:144] found /home/jenkins/minikube-integration/21835-5913/.minikube/key.pem, removing ...
	I1101 09:19:44.079775  240980 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21835-5913/.minikube/key.pem
	I1101 09:19:44.079799  240980 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21835-5913/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21835-5913/.minikube/key.pem (1675 bytes)
	I1101 09:19:44.079857  240980 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21835-5913/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21835-5913/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21835-5913/.minikube/certs/ca-key.pem org=jenkins.cert-expiration-303094 san=[127.0.0.1 192.168.76.2 cert-expiration-303094 localhost minikube]
	I1101 09:19:44.396345  240980 provision.go:177] copyRemoteCerts
	I1101 09:19:44.396390  240980 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1101 09:19:44.396431  240980 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-303094
	I1101 09:19:44.415729  240980 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33008 SSHKeyPath:/home/jenkins/minikube-integration/21835-5913/.minikube/machines/cert-expiration-303094/id_rsa Username:docker}
	I1101 09:19:44.518093  240980 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-5913/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1101 09:19:44.536593  240980 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-5913/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1101 09:19:44.554798  240980 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-5913/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1101 09:19:44.574921  240980 provision.go:87] duration metric: took 513.938651ms to configureAuth
	I1101 09:19:44.574940  240980 ubuntu.go:206] setting minikube options for container-runtime
	I1101 09:19:44.575103  240980 config.go:182] Loaded profile config "cert-expiration-303094": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:19:44.575186  240980 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-303094
	I1101 09:19:44.595574  240980 main.go:143] libmachine: Using SSH client type: native
	I1101 09:19:44.595893  240980 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33008 <nil> <nil>}
	I1101 09:19:44.595917  240980 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1101 09:19:44.914211  240980 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1101 09:19:44.914230  240980 machine.go:97] duration metric: took 1.351110868s to provisionDockerMachine
	I1101 09:19:44.914245  240980 start.go:293] postStartSetup for "cert-expiration-303094" (driver="docker")
	I1101 09:19:44.914259  240980 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1101 09:19:44.914326  240980 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1101 09:19:44.914380  240980 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-303094
	I1101 09:19:44.934090  240980 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33008 SSHKeyPath:/home/jenkins/minikube-integration/21835-5913/.minikube/machines/cert-expiration-303094/id_rsa Username:docker}
	I1101 09:19:45.035740  240980 ssh_runner.go:195] Run: cat /etc/os-release
	I1101 09:19:45.039537  240980 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1101 09:19:45.039561  240980 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1101 09:19:45.039572  240980 filesync.go:126] Scanning /home/jenkins/minikube-integration/21835-5913/.minikube/addons for local assets ...
	I1101 09:19:45.039626  240980 filesync.go:126] Scanning /home/jenkins/minikube-integration/21835-5913/.minikube/files for local assets ...
	I1101 09:19:45.039748  240980 filesync.go:149] local asset: /home/jenkins/minikube-integration/21835-5913/.minikube/files/etc/ssl/certs/94142.pem -> 94142.pem in /etc/ssl/certs
	I1101 09:19:45.039849  240980 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1101 09:19:45.048030  240980 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-5913/.minikube/files/etc/ssl/certs/94142.pem --> /etc/ssl/certs/94142.pem (1708 bytes)
	I1101 09:19:45.066030  240980 start.go:296] duration metric: took 151.771426ms for postStartSetup
	I1101 09:19:45.066101  240980 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1101 09:19:45.066138  240980 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-303094
	I1101 09:19:45.084582  240980 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33008 SSHKeyPath:/home/jenkins/minikube-integration/21835-5913/.minikube/machines/cert-expiration-303094/id_rsa Username:docker}
	I1101 09:19:45.182439  240980 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1101 09:19:45.187420  240980 fix.go:56] duration metric: took 1.64476679s for fixHost
	I1101 09:19:45.187437  240980 start.go:83] releasing machines lock for "cert-expiration-303094", held for 1.644806504s
	I1101 09:19:45.187493  240980 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" cert-expiration-303094
	I1101 09:19:45.206235  240980 ssh_runner.go:195] Run: cat /version.json
	I1101 09:19:45.206270  240980 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-303094
	I1101 09:19:45.206328  240980 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1101 09:19:45.206377  240980 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-303094
	I1101 09:19:45.225610  240980 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33008 SSHKeyPath:/home/jenkins/minikube-integration/21835-5913/.minikube/machines/cert-expiration-303094/id_rsa Username:docker}
	I1101 09:19:45.226814  240980 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33008 SSHKeyPath:/home/jenkins/minikube-integration/21835-5913/.minikube/machines/cert-expiration-303094/id_rsa Username:docker}
	I1101 09:19:45.380994  240980 ssh_runner.go:195] Run: systemctl --version
	I1101 09:19:45.387806  240980 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1101 09:19:45.425401  240980 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1101 09:19:45.430423  240980 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1101 09:19:45.430702  240980 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1101 09:19:45.440875  240980 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1101 09:19:45.440892  240980 start.go:496] detecting cgroup driver to use...
	I1101 09:19:45.440925  240980 detect.go:190] detected "systemd" cgroup driver on host os
	I1101 09:19:45.440970  240980 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1101 09:19:45.456142  240980 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1101 09:19:45.469923  240980 docker.go:218] disabling cri-docker service (if available) ...
	I1101 09:19:45.469965  240980 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1101 09:19:45.487165  240980 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1101 09:19:45.501952  240980 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1101 09:19:45.622448  240980 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1101 09:19:45.753015  240980 docker.go:234] disabling docker service ...
	I1101 09:19:45.753074  240980 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1101 09:19:45.774289  240980 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1101 09:19:45.788905  240980 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1101 09:19:45.911625  240980 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1101 09:19:46.024962  240980 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1101 09:19:46.037793  240980 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1101 09:19:46.052651  240980 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1101 09:19:46.052694  240980 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:19:46.062068  240980 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1101 09:19:46.062120  240980 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:19:46.071199  240980 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:19:46.080689  240980 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:19:46.090062  240980 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1101 09:19:46.098343  240980 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:19:46.107599  240980 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:19:46.116312  240980 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:19:46.125691  240980 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1101 09:19:46.133830  240980 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1101 09:19:46.141929  240980 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 09:19:46.252135  240980 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1101 09:19:46.403825  240980 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1101 09:19:46.403890  240980 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1101 09:19:46.408213  240980 start.go:564] Will wait 60s for crictl version
	I1101 09:19:46.408259  240980 ssh_runner.go:195] Run: which crictl
	I1101 09:19:46.412313  240980 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1101 09:19:46.436952  240980 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1101 09:19:46.437016  240980 ssh_runner.go:195] Run: crio --version
	I1101 09:19:46.466097  240980 ssh_runner.go:195] Run: crio --version
	I1101 09:19:46.495975  240980 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	
	
	==> CRI-O <==
	Nov 01 09:19:35 no-preload-397460 crio[765]: time="2025-11-01T09:19:35.815999387Z" level=info msg="Started container" PID=2913 containerID=8eb5ac9af33026a2ef7e69455cc5ee6a057c57a58a8210cc21786f8a3c3c3382 description=kube-system/storage-provisioner/storage-provisioner id=885f5eeb-2c74-4ebb-8f42-bc0781c45113 name=/runtime.v1.RuntimeService/StartContainer sandboxID=c8cc6120492fb9643fc1c390f6243f03e5e3436c0650ed62b95b8083d5e29adb
	Nov 01 09:19:35 no-preload-397460 crio[765]: time="2025-11-01T09:19:35.817209984Z" level=info msg="Started container" PID=2914 containerID=2f2f8673d374ba3599ef4b7bf7ea559f481c6b9b65e6a19e01c19f04c31d253f description=kube-system/coredns-66bc5c9577-z5578/coredns id=c6b5088a-38a6-463c-8168-362174398f5a name=/runtime.v1.RuntimeService/StartContainer sandboxID=b911a630dd10ff3f414da9593934ea4232c01e7e3798b7f71e4b19129f495226
	Nov 01 09:19:38 no-preload-397460 crio[765]: time="2025-11-01T09:19:38.301923813Z" level=info msg="Running pod sandbox: default/busybox/POD" id=a7dffb9d-2825-4f5f-8cdd-8dd94d9edd63 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 01 09:19:38 no-preload-397460 crio[765]: time="2025-11-01T09:19:38.302073274Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 09:19:38 no-preload-397460 crio[765]: time="2025-11-01T09:19:38.307541217Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:3430b00f2e1c8532cd83f063b45d2f9b3ffe350d1c6557003a33fd7427112179 UID:943cd842-e356-47ea-82aa-89be0c4ca0ca NetNS:/var/run/netns/24e7d43c-e021-4cc6-8547-5db94220c8e7 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc0008a2480}] Aliases:map[]}"
	Nov 01 09:19:38 no-preload-397460 crio[765]: time="2025-11-01T09:19:38.307576536Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Nov 01 09:19:38 no-preload-397460 crio[765]: time="2025-11-01T09:19:38.317055873Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:3430b00f2e1c8532cd83f063b45d2f9b3ffe350d1c6557003a33fd7427112179 UID:943cd842-e356-47ea-82aa-89be0c4ca0ca NetNS:/var/run/netns/24e7d43c-e021-4cc6-8547-5db94220c8e7 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc0008a2480}] Aliases:map[]}"
	Nov 01 09:19:38 no-preload-397460 crio[765]: time="2025-11-01T09:19:38.317208993Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Nov 01 09:19:38 no-preload-397460 crio[765]: time="2025-11-01T09:19:38.317936678Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Nov 01 09:19:38 no-preload-397460 crio[765]: time="2025-11-01T09:19:38.318712687Z" level=info msg="Ran pod sandbox 3430b00f2e1c8532cd83f063b45d2f9b3ffe350d1c6557003a33fd7427112179 with infra container: default/busybox/POD" id=a7dffb9d-2825-4f5f-8cdd-8dd94d9edd63 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 01 09:19:38 no-preload-397460 crio[765]: time="2025-11-01T09:19:38.319980889Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=168b28ac-dc2f-4292-900e-7959cd36f207 name=/runtime.v1.ImageService/ImageStatus
	Nov 01 09:19:38 no-preload-397460 crio[765]: time="2025-11-01T09:19:38.320113161Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=168b28ac-dc2f-4292-900e-7959cd36f207 name=/runtime.v1.ImageService/ImageStatus
	Nov 01 09:19:38 no-preload-397460 crio[765]: time="2025-11-01T09:19:38.320157676Z" level=info msg="Neither image nor artfiact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=168b28ac-dc2f-4292-900e-7959cd36f207 name=/runtime.v1.ImageService/ImageStatus
	Nov 01 09:19:38 no-preload-397460 crio[765]: time="2025-11-01T09:19:38.320663714Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=ea472978-5d30-4bdc-a75f-daa9e70d7e1f name=/runtime.v1.ImageService/PullImage
	Nov 01 09:19:38 no-preload-397460 crio[765]: time="2025-11-01T09:19:38.32216196Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Nov 01 09:19:39 no-preload-397460 crio[765]: time="2025-11-01T09:19:39.024202927Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998" id=ea472978-5d30-4bdc-a75f-daa9e70d7e1f name=/runtime.v1.ImageService/PullImage
	Nov 01 09:19:39 no-preload-397460 crio[765]: time="2025-11-01T09:19:39.024843967Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=7f5eae7d-7589-4aed-9f4a-6baaae8c230b name=/runtime.v1.ImageService/ImageStatus
	Nov 01 09:19:39 no-preload-397460 crio[765]: time="2025-11-01T09:19:39.026298365Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=79f6297c-5379-40cb-817f-0de270a16857 name=/runtime.v1.ImageService/ImageStatus
	Nov 01 09:19:39 no-preload-397460 crio[765]: time="2025-11-01T09:19:39.029574117Z" level=info msg="Creating container: default/busybox/busybox" id=0ef2bf11-4d5e-435c-a215-2c6187f67816 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 09:19:39 no-preload-397460 crio[765]: time="2025-11-01T09:19:39.029697106Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 09:19:39 no-preload-397460 crio[765]: time="2025-11-01T09:19:39.034313855Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 09:19:39 no-preload-397460 crio[765]: time="2025-11-01T09:19:39.034928709Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 09:19:39 no-preload-397460 crio[765]: time="2025-11-01T09:19:39.05920962Z" level=info msg="Created container 0d249e83f7a9345da2a6c064e60da5440f0209b1dd9fdff6e9d8b59b0db4c389: default/busybox/busybox" id=0ef2bf11-4d5e-435c-a215-2c6187f67816 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 09:19:39 no-preload-397460 crio[765]: time="2025-11-01T09:19:39.059902261Z" level=info msg="Starting container: 0d249e83f7a9345da2a6c064e60da5440f0209b1dd9fdff6e9d8b59b0db4c389" id=eab61d64-f907-4abf-bdce-c448ff450af7 name=/runtime.v1.RuntimeService/StartContainer
	Nov 01 09:19:39 no-preload-397460 crio[765]: time="2025-11-01T09:19:39.061669859Z" level=info msg="Started container" PID=2993 containerID=0d249e83f7a9345da2a6c064e60da5440f0209b1dd9fdff6e9d8b59b0db4c389 description=default/busybox/busybox id=eab61d64-f907-4abf-bdce-c448ff450af7 name=/runtime.v1.RuntimeService/StartContainer sandboxID=3430b00f2e1c8532cd83f063b45d2f9b3ffe350d1c6557003a33fd7427112179
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	0d249e83f7a93       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998   9 seconds ago       Running             busybox                   0                   3430b00f2e1c8       busybox                                     default
	2f2f8673d374b       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                      12 seconds ago      Running             coredns                   0                   b911a630dd10f       coredns-66bc5c9577-z5578                    kube-system
	8eb5ac9af3302       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      12 seconds ago      Running             storage-provisioner       0                   c8cc6120492fb       storage-provisioner                         kube-system
	7c53a0e3a4e22       docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11    23 seconds ago      Running             kindnet-cni               0                   d00d7f14fa318       kindnet-lddf5                               kube-system
	eb100e7629c8f       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                      25 seconds ago      Running             kube-proxy                0                   a1cb1f1c1f197       kube-proxy-5kpft                            kube-system
	52eb52da8704e       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                      36 seconds ago      Running             kube-scheduler            0                   afec918c92e16       kube-scheduler-no-preload-397460            kube-system
	0056faffad2d1       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                      36 seconds ago      Running             kube-apiserver            0                   6dbba4d6ba336       kube-apiserver-no-preload-397460            kube-system
	c503baed05c39       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                      36 seconds ago      Running             kube-controller-manager   0                   e72523499b1fd       kube-controller-manager-no-preload-397460   kube-system
	2bc65aeac994f       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                      36 seconds ago      Running             etcd                      0                   adbe77b8adf70       etcd-no-preload-397460                      kube-system
	
	
	==> coredns [2f2f8673d374ba3599ef4b7bf7ea559f481c6b9b65e6a19e01c19f04c31d253f] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = c7556d8fdf49c5e32a9077be8cfb9fc6947bb07e663a10d55b192eb63ad1f2bd9793e8e5f5a36fc9abb1957831eec5c997fd9821790e3990ae9531bf41ecea37
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:35222 - 61816 "HINFO IN 5881945721060823233.8473185993604615655. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.50665994s
	
	
	==> describe nodes <==
	Name:               no-preload-397460
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-397460
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=21e20c7776311c6e29254646bf2620ea610dd192
	                    minikube.k8s.io/name=no-preload-397460
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_01T09_19_17_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 01 Nov 2025 09:19:14 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-397460
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 01 Nov 2025 09:19:47 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 01 Nov 2025 09:19:47 +0000   Sat, 01 Nov 2025 09:19:12 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 01 Nov 2025 09:19:47 +0000   Sat, 01 Nov 2025 09:19:12 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 01 Nov 2025 09:19:47 +0000   Sat, 01 Nov 2025 09:19:12 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 01 Nov 2025 09:19:47 +0000   Sat, 01 Nov 2025 09:19:35 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.94.2
	  Hostname:    no-preload-397460
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863348Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863348Ki
	  pods:               110
	System Info:
	  Machine ID:                 98aac72b9abe9f06f1b9b38568f5cc96
	  System UUID:                782711df-25d2-4083-899f-9ab94eb16882
	  Boot ID:                    87f40279-f2b1-40f3-beea-00aea5942dd1
	  Kernel Version:             6.8.0-1043-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         11s
	  kube-system                 coredns-66bc5c9577-z5578                     100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     26s
	  kube-system                 etcd-no-preload-397460                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         31s
	  kube-system                 kindnet-lddf5                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      26s
	  kube-system                 kube-apiserver-no-preload-397460             250m (3%)     0 (0%)      0 (0%)           0 (0%)         31s
	  kube-system                 kube-controller-manager-no-preload-397460    200m (2%)     0 (0%)      0 (0%)           0 (0%)         31s
	  kube-system                 kube-proxy-5kpft                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         26s
	  kube-system                 kube-scheduler-no-preload-397460             100m (1%)     0 (0%)      0 (0%)           0 (0%)         31s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         25s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 25s   kube-proxy       
	  Normal  Starting                 32s   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  31s   kubelet          Node no-preload-397460 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    31s   kubelet          Node no-preload-397460 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     31s   kubelet          Node no-preload-397460 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           27s   node-controller  Node no-preload-397460 event: Registered Node no-preload-397460 in Controller
	  Normal  NodeReady                13s   kubelet          Node no-preload-397460 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.000025] ll header: 00000000: 0a 94 8e 4a 1a 1e 72 e7 eb 50 9b c7 08 00
	[  +2.047726] IPv4: martian source 10.105.176.244 from 192.168.49.2, on dev eth0
	[  +0.000008] ll header: 00000000: 0a 94 8e 4a 1a 1e 72 e7 eb 50 9b c7 08 00
	[Nov 1 08:38] IPv4: martian source 10.105.176.244 from 192.168.49.2, on dev eth0
	[  +0.000025] ll header: 00000000: 0a 94 8e 4a 1a 1e 72 e7 eb 50 9b c7 08 00
	[  +2.215674] IPv4: martian source 10.105.176.244 from 192.168.49.2, on dev eth0
	[  +0.000007] ll header: 00000000: 0a 94 8e 4a 1a 1e 72 e7 eb 50 9b c7 08 00
	[  +1.047939] IPv4: martian source 10.244.0.3 from 192.168.49.2, on dev eth0
	[  +0.000023] ll header: 00000000: 0a 94 8e 4a 1a 1e 72 e7 eb 50 9b c7 08 00
	[  +1.023915] IPv4: martian source 10.244.0.3 from 192.168.49.2, on dev eth0
	[  +0.000007] ll header: 00000000: 0a 94 8e 4a 1a 1e 72 e7 eb 50 9b c7 08 00
	[  +1.023851] IPv4: martian source 10.244.0.3 from 192.168.49.2, on dev eth0
	[  +0.000021] ll header: 00000000: 0a 94 8e 4a 1a 1e 72 e7 eb 50 9b c7 08 00
	[  +1.023867] IPv4: martian source 10.244.0.3 from 192.168.49.2, on dev eth0
	[  +0.000026] ll header: 00000000: 0a 94 8e 4a 1a 1e 72 e7 eb 50 9b c7 08 00
	[  +1.023880] IPv4: martian source 10.244.0.3 from 192.168.49.2, on dev eth0
	[  +0.000033] ll header: 00000000: 0a 94 8e 4a 1a 1e 72 e7 eb 50 9b c7 08 00
	[  +1.023884] IPv4: martian source 10.244.0.3 from 192.168.49.2, on dev eth0
	[  +0.000029] ll header: 00000000: 0a 94 8e 4a 1a 1e 72 e7 eb 50 9b c7 08 00
	[  +1.023851] IPv4: martian source 10.244.0.3 from 192.168.49.2, on dev eth0
	[  +0.000028] ll header: 00000000: 0a 94 8e 4a 1a 1e 72 e7 eb 50 9b c7 08 00
	[  +4.031524] IPv4: martian source 10.244.0.3 from 192.168.49.2, on dev eth0
	[  +0.000026] ll header: 00000000: 0a 94 8e 4a 1a 1e 72 e7 eb 50 9b c7 08 00
	[  +8.255123] IPv4: martian source 10.244.0.3 from 192.168.49.2, on dev eth0
	[  +0.000022] ll header: 00000000: 0a 94 8e 4a 1a 1e 72 e7 eb 50 9b c7 08 00
	
	
	==> etcd [2bc65aeac994fad169b9a5b0c6fb4eb666925920ae58f4683c72821415ed32a7] <==
	{"level":"warn","ts":"2025-11-01T09:19:13.236690Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54730","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:19:13.244738Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54740","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:19:13.253596Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54754","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:19:13.261428Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54770","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:19:13.267711Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54788","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:19:13.274562Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54798","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:19:13.281534Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54814","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:19:13.288185Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54840","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:19:13.296782Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54858","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:19:13.303856Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54864","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:19:13.311512Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54878","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:19:13.319218Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54892","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:19:13.326897Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54900","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:19:13.334915Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54914","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:19:13.342079Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54938","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:19:13.348641Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54948","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:19:13.356230Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54978","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:19:13.364387Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54996","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:19:13.371850Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55022","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:19:13.380678Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55034","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:19:13.395630Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55078","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:19:13.412342Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55102","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:19:13.420715Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55124","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:19:13.427848Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55138","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:19:13.480600Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55164","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 09:19:48 up  1:02,  0 user,  load average: 3.92, 2.49, 1.45
	Linux no-preload-397460 6.8.0-1043-gcp #46~22.04.1-Ubuntu SMP Wed Oct 22 19:00:03 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [7c53a0e3a4e229d3c3e3cacf663c531d2f60f643db1e8ec4a81eeb34557daba4] <==
	I1101 09:19:24.710309       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1101 09:19:24.710568       1 main.go:139] hostIP = 192.168.94.2
	podIP = 192.168.94.2
	I1101 09:19:24.710701       1 main.go:148] setting mtu 1500 for CNI 
	I1101 09:19:24.710715       1 main.go:178] kindnetd IP family: "ipv4"
	I1101 09:19:24.710744       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-01T09:19:25Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1101 09:19:25.105215       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1101 09:19:25.105389       1 controller.go:381] "Waiting for informer caches to sync"
	I1101 09:19:25.105407       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1101 09:19:25.105599       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1101 09:19:25.505638       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1101 09:19:25.505671       1 metrics.go:72] Registering metrics
	I1101 09:19:25.505766       1 controller.go:711] "Syncing nftables rules"
	I1101 09:19:35.010420       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1101 09:19:35.010500       1 main.go:301] handling current node
	I1101 09:19:45.007534       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1101 09:19:45.007566       1 main.go:301] handling current node
	
	
	==> kube-apiserver [0056faffad2d1c7286d97cc5d6470062e4be0ff9df6971cf527bf374d4204b66] <==
	I1101 09:19:14.239162       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1101 09:19:14.257973       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1101 09:19:14.259214       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1101 09:19:14.267621       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1101 09:19:14.267821       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1101 09:19:14.292053       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1101 09:19:14.302682       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1101 09:19:15.139758       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1101 09:19:15.147001       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1101 09:19:15.147024       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1101 09:19:15.760496       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1101 09:19:15.808570       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1101 09:19:15.940766       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1101 09:19:15.949492       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.94.2]
	I1101 09:19:15.950849       1 controller.go:667] quota admission added evaluator for: endpoints
	I1101 09:19:15.955225       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1101 09:19:16.308933       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1101 09:19:17.094554       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1101 09:19:17.105515       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1101 09:19:17.112480       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1101 09:19:22.062021       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1101 09:19:22.165241       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1101 09:19:22.169672       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1101 09:19:22.411123       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	E1101 09:19:47.088578       1 conn.go:339] Error on socket receive: read tcp 192.168.94.2:8443->192.168.94.1:54196: use of closed network connection
	
	
	==> kube-controller-manager [c503baed05c390a54895b4af448e92f7b492944e74b8ad716e6bb4e0c5d6c1e7] <==
	I1101 09:19:21.308297       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1101 09:19:21.309470       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1101 09:19:21.309514       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1101 09:19:21.309523       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1101 09:19:21.309533       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1101 09:19:21.309573       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1101 09:19:21.309586       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1101 09:19:21.309603       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1101 09:19:21.309782       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1101 09:19:21.309800       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1101 09:19:21.309810       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1101 09:19:21.309838       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1101 09:19:21.310089       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1101 09:19:21.310131       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1101 09:19:21.311614       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1101 09:19:21.312439       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1101 09:19:21.313593       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1101 09:19:21.313652       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1101 09:19:21.313696       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1101 09:19:21.313708       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1101 09:19:21.313716       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1101 09:19:21.314854       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1101 09:19:21.320350       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="no-preload-397460" podCIDRs=["10.244.0.0/24"]
	I1101 09:19:21.336433       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1101 09:19:36.276746       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [eb100e7629c8ff8fbfcfbd69dc9e5a2a82d2912f8764bb307cf681d94d3f8cba] <==
	I1101 09:19:22.857337       1 server_linux.go:53] "Using iptables proxy"
	I1101 09:19:22.925312       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1101 09:19:23.025773       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1101 09:19:23.025825       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.94.2"]
	E1101 09:19:23.025930       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1101 09:19:23.046492       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1101 09:19:23.046551       1 server_linux.go:132] "Using iptables Proxier"
	I1101 09:19:23.052188       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1101 09:19:23.052511       1 server.go:527] "Version info" version="v1.34.1"
	I1101 09:19:23.052544       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1101 09:19:23.054137       1 config.go:106] "Starting endpoint slice config controller"
	I1101 09:19:23.054156       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1101 09:19:23.054175       1 config.go:309] "Starting node config controller"
	I1101 09:19:23.054186       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1101 09:19:23.054189       1 config.go:200] "Starting service config controller"
	I1101 09:19:23.054193       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1101 09:19:23.054195       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1101 09:19:23.054271       1 config.go:403] "Starting serviceCIDR config controller"
	I1101 09:19:23.054282       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1101 09:19:23.155199       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1101 09:19:23.155219       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1101 09:19:23.155203       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [52eb52da8704edc96be0879fa7778f1dc68634f6a438ff816796ab488edfd2e1] <==
	E1101 09:19:14.236423       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1101 09:19:14.236472       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1101 09:19:14.236506       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1101 09:19:14.236524       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1101 09:19:14.236584       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1101 09:19:14.236615       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1101 09:19:14.236713       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1101 09:19:14.236743       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1101 09:19:14.236772       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1101 09:19:14.236827       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1101 09:19:14.236830       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1101 09:19:14.237084       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1101 09:19:15.079183       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1101 09:19:15.096207       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1101 09:19:15.158361       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1101 09:19:15.197188       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1101 09:19:15.253820       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1101 09:19:15.273224       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1101 09:19:15.291133       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1101 09:19:15.451579       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1101 09:19:15.457856       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1101 09:19:15.496070       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1101 09:19:15.530837       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1101 09:19:15.531740       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	I1101 09:19:18.322539       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 01 09:19:17 no-preload-397460 kubelet[2306]: E1101 09:19:17.950175    2306 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-no-preload-397460\" already exists" pod="kube-system/kube-apiserver-no-preload-397460"
	Nov 01 09:19:17 no-preload-397460 kubelet[2306]: I1101 09:19:17.964104    2306 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-no-preload-397460" podStartSLOduration=0.964089088 podStartE2EDuration="964.089088ms" podCreationTimestamp="2025-11-01 09:19:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 09:19:17.964064398 +0000 UTC m=+1.129628981" watchObservedRunningTime="2025-11-01 09:19:17.964089088 +0000 UTC m=+1.129653661"
	Nov 01 09:19:17 no-preload-397460 kubelet[2306]: I1101 09:19:17.974257    2306 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-no-preload-397460" podStartSLOduration=0.974236885 podStartE2EDuration="974.236885ms" podCreationTimestamp="2025-11-01 09:19:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 09:19:17.974223759 +0000 UTC m=+1.139788342" watchObservedRunningTime="2025-11-01 09:19:17.974236885 +0000 UTC m=+1.139801461"
	Nov 01 09:19:17 no-preload-397460 kubelet[2306]: I1101 09:19:17.983830    2306 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-no-preload-397460" podStartSLOduration=0.98380895 podStartE2EDuration="983.80895ms" podCreationTimestamp="2025-11-01 09:19:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 09:19:17.98379753 +0000 UTC m=+1.149362113" watchObservedRunningTime="2025-11-01 09:19:17.98380895 +0000 UTC m=+1.149373532"
	Nov 01 09:19:17 no-preload-397460 kubelet[2306]: I1101 09:19:17.994203    2306 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-no-preload-397460" podStartSLOduration=0.994187261 podStartE2EDuration="994.187261ms" podCreationTimestamp="2025-11-01 09:19:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 09:19:17.994147411 +0000 UTC m=+1.159711993" watchObservedRunningTime="2025-11-01 09:19:17.994187261 +0000 UTC m=+1.159751843"
	Nov 01 09:19:21 no-preload-397460 kubelet[2306]: I1101 09:19:21.335152    2306 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Nov 01 09:19:21 no-preload-397460 kubelet[2306]: I1101 09:19:21.335802    2306 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Nov 01 09:19:22 no-preload-397460 kubelet[2306]: I1101 09:19:22.542559    2306 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/788827b1-dfc6-4921-a791-13a752d335aa-xtables-lock\") pod \"kube-proxy-5kpft\" (UID: \"788827b1-dfc6-4921-a791-13a752d335aa\") " pod="kube-system/kube-proxy-5kpft"
	Nov 01 09:19:22 no-preload-397460 kubelet[2306]: I1101 09:19:22.542599    2306 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/85b09376-b18b-444d-8405-a7045c3732dc-cni-cfg\") pod \"kindnet-lddf5\" (UID: \"85b09376-b18b-444d-8405-a7045c3732dc\") " pod="kube-system/kindnet-lddf5"
	Nov 01 09:19:22 no-preload-397460 kubelet[2306]: I1101 09:19:22.542617    2306 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/85b09376-b18b-444d-8405-a7045c3732dc-xtables-lock\") pod \"kindnet-lddf5\" (UID: \"85b09376-b18b-444d-8405-a7045c3732dc\") " pod="kube-system/kindnet-lddf5"
	Nov 01 09:19:22 no-preload-397460 kubelet[2306]: I1101 09:19:22.542637    2306 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/85b09376-b18b-444d-8405-a7045c3732dc-lib-modules\") pod \"kindnet-lddf5\" (UID: \"85b09376-b18b-444d-8405-a7045c3732dc\") " pod="kube-system/kindnet-lddf5"
	Nov 01 09:19:22 no-preload-397460 kubelet[2306]: I1101 09:19:22.542651    2306 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cwqfn\" (UniqueName: \"kubernetes.io/projected/85b09376-b18b-444d-8405-a7045c3732dc-kube-api-access-cwqfn\") pod \"kindnet-lddf5\" (UID: \"85b09376-b18b-444d-8405-a7045c3732dc\") " pod="kube-system/kindnet-lddf5"
	Nov 01 09:19:22 no-preload-397460 kubelet[2306]: I1101 09:19:22.542671    2306 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/788827b1-dfc6-4921-a791-13a752d335aa-lib-modules\") pod \"kube-proxy-5kpft\" (UID: \"788827b1-dfc6-4921-a791-13a752d335aa\") " pod="kube-system/kube-proxy-5kpft"
	Nov 01 09:19:22 no-preload-397460 kubelet[2306]: I1101 09:19:22.542706    2306 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l8sf4\" (UniqueName: \"kubernetes.io/projected/788827b1-dfc6-4921-a791-13a752d335aa-kube-api-access-l8sf4\") pod \"kube-proxy-5kpft\" (UID: \"788827b1-dfc6-4921-a791-13a752d335aa\") " pod="kube-system/kube-proxy-5kpft"
	Nov 01 09:19:22 no-preload-397460 kubelet[2306]: I1101 09:19:22.542810    2306 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/788827b1-dfc6-4921-a791-13a752d335aa-kube-proxy\") pod \"kube-proxy-5kpft\" (UID: \"788827b1-dfc6-4921-a791-13a752d335aa\") " pod="kube-system/kube-proxy-5kpft"
	Nov 01 09:19:22 no-preload-397460 kubelet[2306]: I1101 09:19:22.962121    2306 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-5kpft" podStartSLOduration=0.962097676 podStartE2EDuration="962.097676ms" podCreationTimestamp="2025-11-01 09:19:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 09:19:22.96189881 +0000 UTC m=+6.127463386" watchObservedRunningTime="2025-11-01 09:19:22.962097676 +0000 UTC m=+6.127662258"
	Nov 01 09:19:24 no-preload-397460 kubelet[2306]: I1101 09:19:24.970177    2306 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-lddf5" podStartSLOduration=1.201295348 podStartE2EDuration="2.970156698s" podCreationTimestamp="2025-11-01 09:19:22 +0000 UTC" firstStartedPulling="2025-11-01 09:19:22.746880074 +0000 UTC m=+5.912444636" lastFinishedPulling="2025-11-01 09:19:24.515741424 +0000 UTC m=+7.681305986" observedRunningTime="2025-11-01 09:19:24.96982671 +0000 UTC m=+8.135391293" watchObservedRunningTime="2025-11-01 09:19:24.970156698 +0000 UTC m=+8.135721282"
	Nov 01 09:19:35 no-preload-397460 kubelet[2306]: I1101 09:19:35.437928    2306 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Nov 01 09:19:35 no-preload-397460 kubelet[2306]: I1101 09:19:35.542173    2306 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v4245\" (UniqueName: \"kubernetes.io/projected/5bbebf5b-a427-4501-881c-fc445ff4054c-kube-api-access-v4245\") pod \"coredns-66bc5c9577-z5578\" (UID: \"5bbebf5b-a427-4501-881c-fc445ff4054c\") " pod="kube-system/coredns-66bc5c9577-z5578"
	Nov 01 09:19:35 no-preload-397460 kubelet[2306]: I1101 09:19:35.542231    2306 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5bbebf5b-a427-4501-881c-fc445ff4054c-config-volume\") pod \"coredns-66bc5c9577-z5578\" (UID: \"5bbebf5b-a427-4501-881c-fc445ff4054c\") " pod="kube-system/coredns-66bc5c9577-z5578"
	Nov 01 09:19:35 no-preload-397460 kubelet[2306]: I1101 09:19:35.542339    2306 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/8c7273a1-68fa-4783-948d-41e29d4fc406-tmp\") pod \"storage-provisioner\" (UID: \"8c7273a1-68fa-4783-948d-41e29d4fc406\") " pod="kube-system/storage-provisioner"
	Nov 01 09:19:35 no-preload-397460 kubelet[2306]: I1101 09:19:35.542888    2306 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wqc2j\" (UniqueName: \"kubernetes.io/projected/8c7273a1-68fa-4783-948d-41e29d4fc406-kube-api-access-wqc2j\") pod \"storage-provisioner\" (UID: \"8c7273a1-68fa-4783-948d-41e29d4fc406\") " pod="kube-system/storage-provisioner"
	Nov 01 09:19:35 no-preload-397460 kubelet[2306]: I1101 09:19:35.995732    2306 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-z5578" podStartSLOduration=13.995711337 podStartE2EDuration="13.995711337s" podCreationTimestamp="2025-11-01 09:19:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 09:19:35.995352287 +0000 UTC m=+19.160916870" watchObservedRunningTime="2025-11-01 09:19:35.995711337 +0000 UTC m=+19.161275919"
	Nov 01 09:19:36 no-preload-397460 kubelet[2306]: I1101 09:19:36.018620    2306 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=13.018594435 podStartE2EDuration="13.018594435s" podCreationTimestamp="2025-11-01 09:19:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 09:19:36.008440019 +0000 UTC m=+19.174004596" watchObservedRunningTime="2025-11-01 09:19:36.018594435 +0000 UTC m=+19.184159018"
	Nov 01 09:19:38 no-preload-397460 kubelet[2306]: I1101 09:19:38.059717    2306 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2xnc7\" (UniqueName: \"kubernetes.io/projected/943cd842-e356-47ea-82aa-89be0c4ca0ca-kube-api-access-2xnc7\") pod \"busybox\" (UID: \"943cd842-e356-47ea-82aa-89be0c4ca0ca\") " pod="default/busybox"
	
	
	==> storage-provisioner [8eb5ac9af33026a2ef7e69455cc5ee6a057c57a58a8210cc21786f8a3c3c3382] <==
	I1101 09:19:35.831018       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1101 09:19:35.840926       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1101 09:19:35.841036       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1101 09:19:35.843405       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:19:35.850157       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1101 09:19:35.850372       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1101 09:19:35.850554       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-397460_025f0cc9-2cd8-4ff0-b431-68f1c41d0e99!
	I1101 09:19:35.851046       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"4b6e7d43-9839-4337-9d32-0088bf11071a", APIVersion:"v1", ResourceVersion:"408", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-397460_025f0cc9-2cd8-4ff0-b431-68f1c41d0e99 became leader
	W1101 09:19:35.853803       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:19:35.859263       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1101 09:19:35.951552       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-397460_025f0cc9-2cd8-4ff0-b431-68f1c41d0e99!
	W1101 09:19:37.863499       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:19:37.868437       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:19:39.871706       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:19:39.876858       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:19:41.880124       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:19:41.884073       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:19:43.886818       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:19:43.891046       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:19:45.894463       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:19:45.899974       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:19:47.904199       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:19:47.910998       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-397460 -n no-preload-397460
helpers_test.go:269: (dbg) Run:  kubectl --context no-preload-397460 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/no-preload/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (2.24s)

x
+
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (2.27s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-236314 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-236314 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (261.437029ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T09:20:40Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-236314 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
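The exit status 11 reported above is the MK_ADDON_ENABLE_PAUSED path shown in the stderr block: before enabling an addon, minikube checks whether any containers on the node are paused by shelling out to "sudo runc list -f json", and on this CRI-O node that call fails because /run/runc does not exist. A minimal Go sketch of such a paused-container check, assuming only what the error text shows (the command and its JSON output) and not minikube's actual implementation:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// runcContainer keeps only the fields this sketch needs from `runc list -f json`.
type runcContainer struct {
	ID     string `json:"id"`
	Status string `json:"status"`
}

// listPaused returns the IDs of paused containers. On this node the exec call
// would fail the same way the addon enable does: runc cannot open /run/runc and
// exits with status 1.
func listPaused() ([]string, error) {
	out, err := exec.Command("sudo", "runc", "list", "-f", "json").Output()
	if err != nil {
		return nil, fmt.Errorf("runc: sudo runc list -f json: %w", err)
	}
	var containers []runcContainer
	if err := json.Unmarshal(out, &containers); err != nil {
		return nil, err
	}
	var paused []string
	for _, c := range containers {
		if c.Status == "paused" {
			paused = append(paused, c.ID)
		}
	}
	return paused, nil
}

func main() {
	if _, err := listPaused(); err != nil {
		fmt.Println("check paused: list paused:", err)
	}
}

Any non-zero exit from the runc call, including the missing /run/runc state directory seen here, surfaces as the "check paused: list paused" error and aborts the addon enable.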
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-236314 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context embed-certs-236314 describe deploy/metrics-server -n kube-system: exit status 1 (59.920423ms)

** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context embed-certs-236314 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
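The expected value in the assertion above is just the --registries override prefixed onto the --images override given to "addons enable" (the registry "fake.domain", a slash, then "registry.k8s.io/echoserver:1.4"). A minimal sketch of that composition, assumed from the expected string in the failure message rather than taken from minikube's addon code:

package main

import "fmt"

// overrideImage joins a registry override with an image reference the way the
// expected value above is formed; both arguments are illustrative inputs.
func overrideImage(registry, image string) string {
	if registry == "" {
		return image
	}
	return registry + "/" + image
}

func main() {
	fmt.Println(overrideImage("fake.domain", "registry.k8s.io/echoserver:1.4"))
	// prints: fake.domain/registry.k8s.io/echoserver:1.4
}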
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect embed-certs-236314
helpers_test.go:243: (dbg) docker inspect embed-certs-236314:

-- stdout --
	[
	    {
	        "Id": "9e1a1d18390353a69d0f0d9962510aadff6b6976ac58bf83ab99b25a52d19c64",
	        "Created": "2025-11-01T09:19:56.919781471Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 244736,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-01T09:19:56.968988085Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:a1caeebaf98ed0136731e905a1e086f77985a42c2ebb5a7e0b3d0bd7fcbe10cc",
	        "ResolvConfPath": "/var/lib/docker/containers/9e1a1d18390353a69d0f0d9962510aadff6b6976ac58bf83ab99b25a52d19c64/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/9e1a1d18390353a69d0f0d9962510aadff6b6976ac58bf83ab99b25a52d19c64/hostname",
	        "HostsPath": "/var/lib/docker/containers/9e1a1d18390353a69d0f0d9962510aadff6b6976ac58bf83ab99b25a52d19c64/hosts",
	        "LogPath": "/var/lib/docker/containers/9e1a1d18390353a69d0f0d9962510aadff6b6976ac58bf83ab99b25a52d19c64/9e1a1d18390353a69d0f0d9962510aadff6b6976ac58bf83ab99b25a52d19c64-json.log",
	        "Name": "/embed-certs-236314",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-236314:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "embed-certs-236314",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "9e1a1d18390353a69d0f0d9962510aadff6b6976ac58bf83ab99b25a52d19c64",
	                "LowerDir": "/var/lib/docker/overlay2/058db38a3e51e77a68a2911f27d674e0411b25d26e2fe50bb66959a3e62a7c04-init/diff:/var/lib/docker/overlay2/95b4043403bc6c2b0722fad0bb31817195e1e004282cc78b772e7c8c9f9def2d/diff",
	                "MergedDir": "/var/lib/docker/overlay2/058db38a3e51e77a68a2911f27d674e0411b25d26e2fe50bb66959a3e62a7c04/merged",
	                "UpperDir": "/var/lib/docker/overlay2/058db38a3e51e77a68a2911f27d674e0411b25d26e2fe50bb66959a3e62a7c04/diff",
	                "WorkDir": "/var/lib/docker/overlay2/058db38a3e51e77a68a2911f27d674e0411b25d26e2fe50bb66959a3e62a7c04/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "embed-certs-236314",
	                "Source": "/var/lib/docker/volumes/embed-certs-236314/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-236314",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-236314",
	                "name.minikube.sigs.k8s.io": "embed-certs-236314",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "f9d3b0e356597d9503aa6d4a19d16d81815275b3e9377f50d9a2ba043a202535",
	            "SandboxKey": "/var/run/docker/netns/f9d3b0e35659",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33063"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33064"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33067"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33065"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33066"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "embed-certs-236314": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "f2:b4:be:58:c3:0e",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "2f536846b22cd19ee4958cff8ea6caf971d5b2fed6041edde3ccc625d2886d4f",
	                    "EndpointID": "6d63c86d519f859bd1be98882e87fe5f0afded0e18a6d8cd0bbeffdb59d680e7",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-236314",
	                        "9e1a1d183903"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-236314 -n embed-certs-236314
helpers_test.go:252: <<< TestStartStop/group/embed-certs/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-236314 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-236314 logs -n 25: (1.07670236s)
helpers_test.go:260: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬────────────────────
─┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼────────────────────
─┤
	│ ssh     │ -p cilium-204434 sudo systemctl status containerd --all --full --no-pager                                                                                                                                                                     │ cilium-204434          │ jenkins │ v1.37.0 │ 01 Nov 25 09:18 UTC │                     │
	│ ssh     │ -p cilium-204434 sudo systemctl cat containerd --no-pager                                                                                                                                                                                     │ cilium-204434          │ jenkins │ v1.37.0 │ 01 Nov 25 09:18 UTC │                     │
	│ ssh     │ -p cilium-204434 sudo cat /lib/systemd/system/containerd.service                                                                                                                                                                              │ cilium-204434          │ jenkins │ v1.37.0 │ 01 Nov 25 09:18 UTC │                     │
	│ ssh     │ -p cilium-204434 sudo cat /etc/containerd/config.toml                                                                                                                                                                                         │ cilium-204434          │ jenkins │ v1.37.0 │ 01 Nov 25 09:18 UTC │                     │
	│ ssh     │ -p cilium-204434 sudo containerd config dump                                                                                                                                                                                                  │ cilium-204434          │ jenkins │ v1.37.0 │ 01 Nov 25 09:18 UTC │                     │
	│ ssh     │ -p cilium-204434 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                           │ cilium-204434          │ jenkins │ v1.37.0 │ 01 Nov 25 09:18 UTC │                     │
	│ ssh     │ -p cilium-204434 sudo systemctl cat crio --no-pager                                                                                                                                                                                           │ cilium-204434          │ jenkins │ v1.37.0 │ 01 Nov 25 09:18 UTC │                     │
	│ ssh     │ -p cilium-204434 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                 │ cilium-204434          │ jenkins │ v1.37.0 │ 01 Nov 25 09:18 UTC │                     │
	│ ssh     │ -p cilium-204434 sudo crio config                                                                                                                                                                                                             │ cilium-204434          │ jenkins │ v1.37.0 │ 01 Nov 25 09:18 UTC │                     │
	│ delete  │ -p cilium-204434                                                                                                                                                                                                                              │ cilium-204434          │ jenkins │ v1.37.0 │ 01 Nov 25 09:18 UTC │ 01 Nov 25 09:18 UTC │
	│ start   │ -p old-k8s-version-152344 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-152344 │ jenkins │ v1.37.0 │ 01 Nov 25 09:18 UTC │ 01 Nov 25 09:19 UTC │
	│ delete  │ -p running-upgrade-274843                                                                                                                                                                                                                     │ running-upgrade-274843 │ jenkins │ v1.37.0 │ 01 Nov 25 09:18 UTC │ 01 Nov 25 09:18 UTC │
	│ start   │ -p no-preload-397460 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-397460      │ jenkins │ v1.37.0 │ 01 Nov 25 09:18 UTC │ 01 Nov 25 09:19 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-152344 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-152344 │ jenkins │ v1.37.0 │ 01 Nov 25 09:19 UTC │                     │
	│ stop    │ -p old-k8s-version-152344 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-152344 │ jenkins │ v1.37.0 │ 01 Nov 25 09:19 UTC │ 01 Nov 25 09:19 UTC │
	│ start   │ -p cert-expiration-303094 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-303094 │ jenkins │ v1.37.0 │ 01 Nov 25 09:19 UTC │ 01 Nov 25 09:19 UTC │
	│ addons  │ enable metrics-server -p no-preload-397460 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-397460      │ jenkins │ v1.37.0 │ 01 Nov 25 09:19 UTC │                     │
	│ delete  │ -p cert-expiration-303094                                                                                                                                                                                                                     │ cert-expiration-303094 │ jenkins │ v1.37.0 │ 01 Nov 25 09:19 UTC │ 01 Nov 25 09:19 UTC │
	│ stop    │ -p no-preload-397460 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-397460      │ jenkins │ v1.37.0 │ 01 Nov 25 09:19 UTC │ 01 Nov 25 09:20 UTC │
	│ start   │ -p embed-certs-236314 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-236314     │ jenkins │ v1.37.0 │ 01 Nov 25 09:19 UTC │ 01 Nov 25 09:20 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-152344 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-152344 │ jenkins │ v1.37.0 │ 01 Nov 25 09:19 UTC │ 01 Nov 25 09:19 UTC │
	│ start   │ -p old-k8s-version-152344 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-152344 │ jenkins │ v1.37.0 │ 01 Nov 25 09:19 UTC │                     │
	│ addons  │ enable dashboard -p no-preload-397460 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-397460      │ jenkins │ v1.37.0 │ 01 Nov 25 09:20 UTC │ 01 Nov 25 09:20 UTC │
	│ start   │ -p no-preload-397460 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-397460      │ jenkins │ v1.37.0 │ 01 Nov 25 09:20 UTC │                     │
	│ addons  │ enable metrics-server -p embed-certs-236314 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-236314     │ jenkins │ v1.37.0 │ 01 Nov 25 09:20 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴────────────────────
─┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/01 09:20:07
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1101 09:20:07.824121  248920 out.go:360] Setting OutFile to fd 1 ...
	I1101 09:20:07.824244  248920 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:20:07.824257  248920 out.go:374] Setting ErrFile to fd 2...
	I1101 09:20:07.824262  248920 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:20:07.824561  248920 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21835-5913/.minikube/bin
	I1101 09:20:07.825183  248920 out.go:368] Setting JSON to false
	I1101 09:20:07.826751  248920 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":3756,"bootTime":1761985052,"procs":317,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1101 09:20:07.826905  248920 start.go:143] virtualization: kvm guest
	I1101 09:20:07.829083  248920 out.go:179] * [no-preload-397460] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1101 09:20:07.831155  248920 notify.go:221] Checking for updates...
	I1101 09:20:07.831752  248920 out.go:179]   - MINIKUBE_LOCATION=21835
	I1101 09:20:07.833569  248920 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1101 09:20:07.835025  248920 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21835-5913/kubeconfig
	I1101 09:20:07.836469  248920 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21835-5913/.minikube
	I1101 09:20:07.837570  248920 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1101 09:20:07.838682  248920 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1101 09:20:07.840719  248920 config.go:182] Loaded profile config "no-preload-397460": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:20:07.841416  248920 driver.go:422] Setting default libvirt URI to qemu:///system
	I1101 09:20:07.880637  248920 docker.go:124] docker version: linux-28.5.1:Docker Engine - Community
	I1101 09:20:07.880766  248920 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 09:20:07.968788  248920 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:65 OomKillDisable:false NGoroutines:77 SystemTime:2025-11-01 09:20:07.954680855 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1101 09:20:07.968985  248920 docker.go:319] overlay module found
	I1101 09:20:07.970832  248920 out.go:179] * Using the docker driver based on existing profile
	I1101 09:20:07.972148  248920 start.go:309] selected driver: docker
	I1101 09:20:07.972165  248920 start.go:930] validating driver "docker" against &{Name:no-preload-397460 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-397460 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9
p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 09:20:07.972309  248920 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1101 09:20:07.973100  248920 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 09:20:08.061146  248920 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:65 OomKillDisable:false NGoroutines:77 SystemTime:2025-11-01 09:20:08.038811987 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1101 09:20:08.061451  248920 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1101 09:20:08.061481  248920 cni.go:84] Creating CNI manager for ""
	I1101 09:20:08.061530  248920 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1101 09:20:08.061566  248920 start.go:353] cluster config:
	{Name:no-preload-397460 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-397460 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false
DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 09:20:08.064212  248920 out.go:179] * Starting "no-preload-397460" primary control-plane node in "no-preload-397460" cluster
	I1101 09:20:08.065504  248920 cache.go:124] Beginning downloading kic base image for docker with crio
	I1101 09:20:08.066959  248920 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1101 09:20:08.077802  248920 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1101 09:20:08.078003  248920 profile.go:143] Saving config to /home/jenkins/minikube-integration/21835-5913/.minikube/profiles/no-preload-397460/config.json ...
	I1101 09:20:08.078426  248920 cache.go:107] acquiring lock: {Name:mk3da340e5af70247539f8d922cc7bcce42509cd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1101 09:20:08.078516  248920 cache.go:115] /home/jenkins/minikube-integration/21835-5913/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1101 09:20:08.078525  248920 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/21835-5913/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 115.294µs
	I1101 09:20:08.078539  248920 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/21835-5913/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1101 09:20:08.078613  248920 cache.go:107] acquiring lock: {Name:mkeb0cf358eb16140604b4a70399a3a029115110 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1101 09:20:08.078679  248920 cache.go:115] /home/jenkins/minikube-integration/21835-5913/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1 exists
	I1101 09:20:08.078691  248920 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.34.1" -> "/home/jenkins/minikube-integration/21835-5913/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1" took 85.118µs
	I1101 09:20:08.078701  248920 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.34.1 -> /home/jenkins/minikube-integration/21835-5913/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1 succeeded
	I1101 09:20:08.078675  248920 cache.go:107] acquiring lock: {Name:mk5f63b0ef2d772b57fd677bfc33b86408c18616 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1101 09:20:08.078733  248920 cache.go:107] acquiring lock: {Name:mk62a069595cd51732c873403f79b944c968023c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1101 09:20:08.078776  248920 cache.go:115] /home/jenkins/minikube-integration/21835-5913/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 exists
	I1101 09:20:08.078786  248920 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/21835-5913/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1" took 127.516µs
	I1101 09:20:08.078762  248920 cache.go:107] acquiring lock: {Name:mkd3b96e72872fe46da4959b7624b5cd21026b8f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1101 09:20:08.078817  248920 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/21835-5913/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 succeeded
	I1101 09:20:08.078826  248920 cache.go:107] acquiring lock: {Name:mk653da29d9cc7e07521281dd09bd564dc663636 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1101 09:20:08.078847  248920 cache.go:107] acquiring lock: {Name:mk7e6b43fbb8c3177a2ddbf45490c8c23268d610 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1101 09:20:08.078885  248920 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1101 09:20:08.078968  248920 cache.go:115] /home/jenkins/minikube-integration/21835-5913/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1 exists
	I1101 09:20:08.078968  248920 cache.go:115] /home/jenkins/minikube-integration/21835-5913/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1 exists
	I1101 09:20:08.078795  248920 cache.go:115] /home/jenkins/minikube-integration/21835-5913/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1 exists
	I1101 09:20:08.078995  248920 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.34.1" -> "/home/jenkins/minikube-integration/21835-5913/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1" took 278.663µs
	I1101 09:20:08.078995  248920 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.34.1" -> "/home/jenkins/minikube-integration/21835-5913/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1" took 151.785µs
	I1101 09:20:08.079011  248920 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.34.1 -> /home/jenkins/minikube-integration/21835-5913/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1 succeeded
	I1101 09:20:08.078983  248920 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.34.1" -> "/home/jenkins/minikube-integration/21835-5913/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1" took 160.263µs
	I1101 09:20:08.079022  248920 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.34.1 -> /home/jenkins/minikube-integration/21835-5913/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1 succeeded
	I1101 09:20:08.078968  248920 cache.go:115] /home/jenkins/minikube-integration/21835-5913/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1 exists
	I1101 09:20:08.079033  248920 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.12.1" -> "/home/jenkins/minikube-integration/21835-5913/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1" took 317.249µs
	I1101 09:20:08.079044  248920 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.12.1 -> /home/jenkins/minikube-integration/21835-5913/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1 succeeded
	I1101 09:20:08.079004  248920 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.34.1 -> /home/jenkins/minikube-integration/21835-5913/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1 succeeded
	I1101 09:20:08.079280  248920 cache.go:107] acquiring lock: {Name:mkd44e5d327380ad6f0bfcd24859998cff83b1da Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1101 09:20:08.079358  248920 cache.go:115] /home/jenkins/minikube-integration/21835-5913/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0 exists
	I1101 09:20:08.079371  248920 cache.go:96] cache image "registry.k8s.io/etcd:3.6.4-0" -> "/home/jenkins/minikube-integration/21835-5913/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0" took 98.837µs
	I1101 09:20:08.079379  248920 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.4-0 -> /home/jenkins/minikube-integration/21835-5913/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0 succeeded
	I1101 09:20:08.079388  248920 cache.go:87] Successfully saved all images to host disk.
	I1101 09:20:08.108479  248920 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1101 09:20:08.108563  248920 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1101 09:20:08.108589  248920 cache.go:233] Successfully downloaded all kic artifacts
	I1101 09:20:08.108632  248920 start.go:360] acquireMachinesLock for no-preload-397460: {Name:mk53345d4b51e8783ff01ad93264377536fe034e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1101 09:20:08.108697  248920 start.go:364] duration metric: took 45.087µs to acquireMachinesLock for "no-preload-397460"
	I1101 09:20:08.108720  248920 start.go:96] Skipping create...Using existing machine configuration
	I1101 09:20:08.108730  248920 fix.go:54] fixHost starting: 
	I1101 09:20:08.109009  248920 cli_runner.go:164] Run: docker container inspect no-preload-397460 --format={{.State.Status}}
	I1101 09:20:08.130269  248920 fix.go:112] recreateIfNeeded on no-preload-397460: state=Stopped err=<nil>
	W1101 09:20:08.130328  248920 fix.go:138] unexpected machine state, will restart: <nil>
	I1101 09:20:07.694334  245059 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.748715283s)
	I1101 09:20:07.694405  245059 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (2.748347256s)
	I1101 09:20:07.694431  245059 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-152344" to be "Ready" ...
	I1101 09:20:07.694946  245059 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.727783015s)
	I1101 09:20:07.711036  245059 node_ready.go:49] node "old-k8s-version-152344" is "Ready"
	I1101 09:20:07.711067  245059 node_ready.go:38] duration metric: took 16.621829ms for node "old-k8s-version-152344" to be "Ready" ...
	I1101 09:20:07.711082  245059 api_server.go:52] waiting for apiserver process to appear ...
	I1101 09:20:07.711134  245059 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 09:20:08.204004  245059 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (3.108481952s)
	I1101 09:20:08.204244  245059 api_server.go:72] duration metric: took 3.450802654s to wait for apiserver process to appear ...
	I1101 09:20:08.204259  245059 api_server.go:88] waiting for apiserver healthz status ...
	I1101 09:20:08.204347  245059 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1101 09:20:08.207534  245059 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p old-k8s-version-152344 addons enable metrics-server
	
	I1101 09:20:08.209323  245059 out.go:179] * Enabled addons: storage-provisioner, default-storageclass, dashboard
	I1101 09:20:07.069955  216020 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1101 09:20:07.070371  216020 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1101 09:20:07.070422  216020 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1101 09:20:07.070484  216020 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1101 09:20:07.107704  216020 cri.go:89] found id: "f1d1ad774988f50a8cc9b852d65649e1e6961c9ea7280d9c95161580ca21bce2"
	I1101 09:20:07.107737  216020 cri.go:89] found id: ""
	I1101 09:20:07.107747  216020 logs.go:282] 1 containers: [f1d1ad774988f50a8cc9b852d65649e1e6961c9ea7280d9c95161580ca21bce2]
	I1101 09:20:07.107804  216020 ssh_runner.go:195] Run: which crictl
	I1101 09:20:07.112496  216020 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1101 09:20:07.112579  216020 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1101 09:20:07.145420  216020 cri.go:89] found id: ""
	I1101 09:20:07.145448  216020 logs.go:282] 0 containers: []
	W1101 09:20:07.145459  216020 logs.go:284] No container was found matching "etcd"
	I1101 09:20:07.145467  216020 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1101 09:20:07.145524  216020 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1101 09:20:07.181673  216020 cri.go:89] found id: ""
	I1101 09:20:07.181701  216020 logs.go:282] 0 containers: []
	W1101 09:20:07.181711  216020 logs.go:284] No container was found matching "coredns"
	I1101 09:20:07.181718  216020 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1101 09:20:07.181776  216020 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1101 09:20:07.212238  216020 cri.go:89] found id: "7bc51deb4b19b20f402b361941a1b67db11ece9919ebd34259e751ec1f07abb8"
	I1101 09:20:07.212262  216020 cri.go:89] found id: ""
	I1101 09:20:07.212271  216020 logs.go:282] 1 containers: [7bc51deb4b19b20f402b361941a1b67db11ece9919ebd34259e751ec1f07abb8]
	I1101 09:20:07.212330  216020 ssh_runner.go:195] Run: which crictl
	I1101 09:20:07.216407  216020 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1101 09:20:07.216479  216020 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1101 09:20:07.245595  216020 cri.go:89] found id: ""
	I1101 09:20:07.245620  216020 logs.go:282] 0 containers: []
	W1101 09:20:07.245629  216020 logs.go:284] No container was found matching "kube-proxy"
	I1101 09:20:07.245637  216020 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1101 09:20:07.245700  216020 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1101 09:20:07.281736  216020 cri.go:89] found id: "e12b6f88db2ee59fe145026094d5a5de34732fe6f41de9f534b1a2f1a975b677"
	I1101 09:20:07.281764  216020 cri.go:89] found id: ""
	I1101 09:20:07.281775  216020 logs.go:282] 1 containers: [e12b6f88db2ee59fe145026094d5a5de34732fe6f41de9f534b1a2f1a975b677]
	I1101 09:20:07.281927  216020 ssh_runner.go:195] Run: which crictl
	I1101 09:20:07.287281  216020 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1101 09:20:07.287345  216020 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1101 09:20:07.325475  216020 cri.go:89] found id: ""
	I1101 09:20:07.325501  216020 logs.go:282] 0 containers: []
	W1101 09:20:07.325519  216020 logs.go:284] No container was found matching "kindnet"
	I1101 09:20:07.325527  216020 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1101 09:20:07.325580  216020 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1101 09:20:07.392343  216020 cri.go:89] found id: ""
	I1101 09:20:07.392382  216020 logs.go:282] 0 containers: []
	W1101 09:20:07.392393  216020 logs.go:284] No container was found matching "storage-provisioner"
	I1101 09:20:07.392404  216020 logs.go:123] Gathering logs for container status ...
	I1101 09:20:07.392419  216020 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1101 09:20:07.457477  216020 logs.go:123] Gathering logs for kubelet ...
	I1101 09:20:07.457507  216020 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1101 09:20:07.609568  216020 logs.go:123] Gathering logs for dmesg ...
	I1101 09:20:07.609662  216020 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1101 09:20:07.634481  216020 logs.go:123] Gathering logs for describe nodes ...
	I1101 09:20:07.634520  216020 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1101 09:20:07.725685  216020 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1101 09:20:07.725709  216020 logs.go:123] Gathering logs for kube-apiserver [f1d1ad774988f50a8cc9b852d65649e1e6961c9ea7280d9c95161580ca21bce2] ...
	I1101 09:20:07.725728  216020 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f1d1ad774988f50a8cc9b852d65649e1e6961c9ea7280d9c95161580ca21bce2"
	I1101 09:20:07.774960  216020 logs.go:123] Gathering logs for kube-scheduler [7bc51deb4b19b20f402b361941a1b67db11ece9919ebd34259e751ec1f07abb8] ...
	I1101 09:20:07.774999  216020 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7bc51deb4b19b20f402b361941a1b67db11ece9919ebd34259e751ec1f07abb8"
	I1101 09:20:07.842944  216020 logs.go:123] Gathering logs for kube-controller-manager [e12b6f88db2ee59fe145026094d5a5de34732fe6f41de9f534b1a2f1a975b677] ...
	I1101 09:20:07.842976  216020 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e12b6f88db2ee59fe145026094d5a5de34732fe6f41de9f534b1a2f1a975b677"
	I1101 09:20:07.882521  216020 logs.go:123] Gathering logs for CRI-O ...
	I1101 09:20:07.882552  216020 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1101 09:20:08.209889  245059 api_server.go:279] https://192.168.103.2:8443/healthz returned 200:
	ok
	I1101 09:20:08.211034  245059 addons.go:515] duration metric: took 3.457552637s for enable addons: enabled=[storage-provisioner default-storageclass dashboard]
	I1101 09:20:08.211053  245059 api_server.go:141] control plane version: v1.28.0
	I1101 09:20:08.211076  245059 api_server.go:131] duration metric: took 6.810482ms to wait for apiserver health ...
	I1101 09:20:08.211086  245059 system_pods.go:43] waiting for kube-system pods to appear ...
	I1101 09:20:08.215355  245059 system_pods.go:59] 8 kube-system pods found
	I1101 09:20:08.215409  245059 system_pods.go:61] "coredns-5dd5756b68-gcvgr" [5ec9963a-a709-4a14-a266-039c4d3d9ebe] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 09:20:08.215423  245059 system_pods.go:61] "etcd-old-k8s-version-152344" [973f94b6-5289-43a5-a31e-f998c94609af] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1101 09:20:08.215437  245059 system_pods.go:61] "kindnet-9lbnx" [cdc79b68-caf2-4ddb-afff-7391a2e1402f] Running
	I1101 09:20:08.215449  245059 system_pods.go:61] "kube-apiserver-old-k8s-version-152344" [a39e0ba4-51e7-45c4-b176-5b3801cc2f23] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1101 09:20:08.215462  245059 system_pods.go:61] "kube-controller-manager-old-k8s-version-152344" [6617a056-e5f0-49b1-8c5c-5d2f293183ab] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1101 09:20:08.215470  245059 system_pods.go:61] "kube-proxy-w5hpl" [dccd7023-4810-4cc3-9ebd-d7fe6cffce88] Running
	I1101 09:20:08.215479  245059 system_pods.go:61] "kube-scheduler-old-k8s-version-152344" [7cab04d5-174b-4b38-a984-98ebc0ab7983] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1101 09:20:08.215488  245059 system_pods.go:61] "storage-provisioner" [d5b72a56-2397-4702-8443-4b854af93d01] Running
	I1101 09:20:08.215497  245059 system_pods.go:74] duration metric: took 4.396779ms to wait for pod list to return data ...
	I1101 09:20:08.215506  245059 default_sa.go:34] waiting for default service account to be created ...
	I1101 09:20:08.218600  245059 default_sa.go:45] found service account: "default"
	I1101 09:20:08.218621  245059 default_sa.go:55] duration metric: took 3.106704ms for default service account to be created ...
	I1101 09:20:08.218630  245059 system_pods.go:116] waiting for k8s-apps to be running ...
	I1101 09:20:08.222014  245059 system_pods.go:86] 8 kube-system pods found
	I1101 09:20:08.222048  245059 system_pods.go:89] "coredns-5dd5756b68-gcvgr" [5ec9963a-a709-4a14-a266-039c4d3d9ebe] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 09:20:08.222058  245059 system_pods.go:89] "etcd-old-k8s-version-152344" [973f94b6-5289-43a5-a31e-f998c94609af] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1101 09:20:08.222063  245059 system_pods.go:89] "kindnet-9lbnx" [cdc79b68-caf2-4ddb-afff-7391a2e1402f] Running
	I1101 09:20:08.222070  245059 system_pods.go:89] "kube-apiserver-old-k8s-version-152344" [a39e0ba4-51e7-45c4-b176-5b3801cc2f23] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1101 09:20:08.222085  245059 system_pods.go:89] "kube-controller-manager-old-k8s-version-152344" [6617a056-e5f0-49b1-8c5c-5d2f293183ab] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1101 09:20:08.222090  245059 system_pods.go:89] "kube-proxy-w5hpl" [dccd7023-4810-4cc3-9ebd-d7fe6cffce88] Running
	I1101 09:20:08.222101  245059 system_pods.go:89] "kube-scheduler-old-k8s-version-152344" [7cab04d5-174b-4b38-a984-98ebc0ab7983] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1101 09:20:08.222104  245059 system_pods.go:89] "storage-provisioner" [d5b72a56-2397-4702-8443-4b854af93d01] Running
	I1101 09:20:08.222113  245059 system_pods.go:126] duration metric: took 3.478188ms to wait for k8s-apps to be running ...
	I1101 09:20:08.222120  245059 system_svc.go:44] waiting for kubelet service to be running ....
	I1101 09:20:08.222165  245059 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 09:20:08.236751  245059 system_svc.go:56] duration metric: took 14.610343ms WaitForService to wait for kubelet
	I1101 09:20:08.236783  245059 kubeadm.go:587] duration metric: took 3.483340857s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1101 09:20:08.236840  245059 node_conditions.go:102] verifying NodePressure condition ...
	I1101 09:20:08.239785  245059 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1101 09:20:08.239808  245059 node_conditions.go:123] node cpu capacity is 8
	I1101 09:20:08.239820  245059 node_conditions.go:105] duration metric: took 2.975311ms to run NodePressure ...
	I1101 09:20:08.239832  245059 start.go:242] waiting for startup goroutines ...
	I1101 09:20:08.239838  245059 start.go:247] waiting for cluster config update ...
	I1101 09:20:08.239848  245059 start.go:256] writing updated cluster config ...
	I1101 09:20:08.240136  245059 ssh_runner.go:195] Run: rm -f paused
	I1101 09:20:08.245843  245059 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1101 09:20:08.252272  245059 pod_ready.go:83] waiting for pod "coredns-5dd5756b68-gcvgr" in "kube-system" namespace to be "Ready" or be gone ...
	W1101 09:20:10.258818  245059 pod_ready.go:104] pod "coredns-5dd5756b68-gcvgr" is not "Ready", error: <nil>
	W1101 09:20:12.259506  245059 pod_ready.go:104] pod "coredns-5dd5756b68-gcvgr" is not "Ready", error: <nil>
	I1101 09:20:08.133002  248920 out.go:252] * Restarting existing docker container for "no-preload-397460" ...
	I1101 09:20:08.133080  248920 cli_runner.go:164] Run: docker start no-preload-397460
	I1101 09:20:08.413283  248920 cli_runner.go:164] Run: docker container inspect no-preload-397460 --format={{.State.Status}}
	I1101 09:20:08.434276  248920 kic.go:430] container "no-preload-397460" state is running.
	I1101 09:20:08.434844  248920 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-397460
	I1101 09:20:08.456223  248920 profile.go:143] Saving config to /home/jenkins/minikube-integration/21835-5913/.minikube/profiles/no-preload-397460/config.json ...
	I1101 09:20:08.456556  248920 machine.go:94] provisionDockerMachine start ...
	I1101 09:20:08.456658  248920 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-397460
	I1101 09:20:08.476823  248920 main.go:143] libmachine: Using SSH client type: native
	I1101 09:20:08.477197  248920 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33073 <nil> <nil>}
	I1101 09:20:08.477213  248920 main.go:143] libmachine: About to run SSH command:
	hostname
	I1101 09:20:08.478001  248920 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:35708->127.0.0.1:33073: read: connection reset by peer
	I1101 09:20:11.620031  248920 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-397460
	
	I1101 09:20:11.620059  248920 ubuntu.go:182] provisioning hostname "no-preload-397460"
	I1101 09:20:11.620121  248920 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-397460
	I1101 09:20:11.639135  248920 main.go:143] libmachine: Using SSH client type: native
	I1101 09:20:11.639341  248920 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33073 <nil> <nil>}
	I1101 09:20:11.639354  248920 main.go:143] libmachine: About to run SSH command:
	sudo hostname no-preload-397460 && echo "no-preload-397460" | sudo tee /etc/hostname
	I1101 09:20:11.793777  248920 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-397460
	
	I1101 09:20:11.793883  248920 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-397460
	I1101 09:20:11.816081  248920 main.go:143] libmachine: Using SSH client type: native
	I1101 09:20:11.816361  248920 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33073 <nil> <nil>}
	I1101 09:20:11.816404  248920 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-397460' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-397460/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-397460' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1101 09:20:11.968003  248920 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1101 09:20:11.968031  248920 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21835-5913/.minikube CaCertPath:/home/jenkins/minikube-integration/21835-5913/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21835-5913/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21835-5913/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21835-5913/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21835-5913/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21835-5913/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21835-5913/.minikube}
	I1101 09:20:11.968074  248920 ubuntu.go:190] setting up certificates
	I1101 09:20:11.968089  248920 provision.go:84] configureAuth start
	I1101 09:20:11.968144  248920 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-397460
	I1101 09:20:11.989787  248920 provision.go:143] copyHostCerts
	I1101 09:20:11.989845  248920 exec_runner.go:144] found /home/jenkins/minikube-integration/21835-5913/.minikube/ca.pem, removing ...
	I1101 09:20:11.989860  248920 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21835-5913/.minikube/ca.pem
	I1101 09:20:11.989962  248920 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21835-5913/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21835-5913/.minikube/ca.pem (1078 bytes)
	I1101 09:20:11.990106  248920 exec_runner.go:144] found /home/jenkins/minikube-integration/21835-5913/.minikube/cert.pem, removing ...
	I1101 09:20:11.990120  248920 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21835-5913/.minikube/cert.pem
	I1101 09:20:11.990155  248920 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21835-5913/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21835-5913/.minikube/cert.pem (1123 bytes)
	I1101 09:20:11.990227  248920 exec_runner.go:144] found /home/jenkins/minikube-integration/21835-5913/.minikube/key.pem, removing ...
	I1101 09:20:11.990234  248920 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21835-5913/.minikube/key.pem
	I1101 09:20:11.990263  248920 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21835-5913/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21835-5913/.minikube/key.pem (1675 bytes)
	I1101 09:20:11.990352  248920 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21835-5913/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21835-5913/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21835-5913/.minikube/certs/ca-key.pem org=jenkins.no-preload-397460 san=[127.0.0.1 192.168.94.2 localhost minikube no-preload-397460]
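The server certificate generated above is signed for the SANs listed in that log line. A small standalone sketch (not minikube code) that confirms a PEM-encoded server certificate covers those names, using only the Go standard library; the input path is a placeholder:

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
	)

	func main() {
		data, err := os.ReadFile("server.pem") // placeholder path
		if err != nil {
			panic(err)
		}
		block, _ := pem.Decode(data)
		if block == nil {
			panic("no PEM block found")
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			panic(err)
		}
		// VerifyHostname accepts both DNS names and IP addresses in string form;
		// a nil result means the certificate is valid for that name.
		for _, san := range []string{"127.0.0.1", "192.168.94.2", "localhost", "minikube", "no-preload-397460"} {
			fmt.Printf("%-20s %v\n", san, cert.VerifyHostname(san))
		}
	}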
	I1101 09:20:12.576679  248920 provision.go:177] copyRemoteCerts
	I1101 09:20:12.576740  248920 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1101 09:20:12.576772  248920 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-397460
	I1101 09:20:12.595352  248920 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33073 SSHKeyPath:/home/jenkins/minikube-integration/21835-5913/.minikube/machines/no-preload-397460/id_rsa Username:docker}
	I1101 09:20:12.697818  248920 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-5913/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1101 09:20:12.719320  248920 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-5913/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1101 09:20:12.740895  248920 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-5913/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1101 09:20:12.762415  248920 provision.go:87] duration metric: took 794.301611ms to configureAuth
	I1101 09:20:12.762449  248920 ubuntu.go:206] setting minikube options for container-runtime
	I1101 09:20:12.762607  248920 config.go:182] Loaded profile config "no-preload-397460": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:20:12.762720  248920 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-397460
	I1101 09:20:12.783043  248920 main.go:143] libmachine: Using SSH client type: native
	I1101 09:20:12.783357  248920 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33073 <nil> <nil>}
	I1101 09:20:12.783384  248920 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1101 09:20:14.089293  243720 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1101 09:20:14.089369  243720 kubeadm.go:319] [preflight] Running pre-flight checks
	I1101 09:20:14.089478  243720 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1101 09:20:14.089559  243720 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1043-gcp
	I1101 09:20:14.089600  243720 kubeadm.go:319] OS: Linux
	I1101 09:20:14.089672  243720 kubeadm.go:319] CGROUPS_CPU: enabled
	I1101 09:20:14.089755  243720 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1101 09:20:14.089820  243720 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1101 09:20:14.090713  243720 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1101 09:20:14.090793  243720 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1101 09:20:14.090890  243720 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1101 09:20:14.090954  243720 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1101 09:20:14.091043  243720 kubeadm.go:319] CGROUPS_IO: enabled
	I1101 09:20:14.091146  243720 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1101 09:20:14.091275  243720 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1101 09:20:14.091391  243720 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1101 09:20:14.091474  243720 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1101 09:20:14.093357  243720 out.go:252]   - Generating certificates and keys ...
	I1101 09:20:14.093465  243720 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1101 09:20:14.093573  243720 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1101 09:20:14.093661  243720 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1101 09:20:14.093731  243720 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1101 09:20:14.093810  243720 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1101 09:20:14.093890  243720 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1101 09:20:14.093990  243720 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1101 09:20:14.094151  243720 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [embed-certs-236314 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1101 09:20:14.094199  243720 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1101 09:20:14.094335  243720 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [embed-certs-236314 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1101 09:20:14.094393  243720 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1101 09:20:14.094447  243720 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1101 09:20:14.094490  243720 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1101 09:20:14.094547  243720 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1101 09:20:14.094591  243720 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1101 09:20:14.094640  243720 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1101 09:20:14.094701  243720 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1101 09:20:14.094757  243720 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1101 09:20:14.094816  243720 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1101 09:20:14.094922  243720 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1101 09:20:14.095005  243720 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1101 09:20:14.096558  243720 out.go:252]   - Booting up control plane ...
	I1101 09:20:14.096680  243720 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1101 09:20:14.096783  243720 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1101 09:20:14.096893  243720 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1101 09:20:14.097069  243720 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1101 09:20:14.097219  243720 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1101 09:20:14.097389  243720 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1101 09:20:14.097523  243720 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1101 09:20:14.097590  243720 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1101 09:20:14.097789  243720 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1101 09:20:14.097971  243720 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1101 09:20:14.098063  243720 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.001324026s
	I1101 09:20:14.098201  243720 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1101 09:20:14.098311  243720 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	I1101 09:20:14.098418  243720 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1101 09:20:14.098541  243720 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1101 09:20:14.098684  243720 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 1.325178512s
	I1101 09:20:14.098786  243720 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 1.637551848s
	I1101 09:20:14.098909  243720 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 3.50145985s
	I1101 09:20:14.099080  243720 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1101 09:20:14.099284  243720 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1101 09:20:14.099378  243720 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1101 09:20:14.099667  243720 kubeadm.go:319] [mark-control-plane] Marking the node embed-certs-236314 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1101 09:20:14.099766  243720 kubeadm.go:319] [bootstrap-token] Using token: gssfgv.jqrlt9glauno6hsk
	I1101 09:20:14.101316  243720 out.go:252]   - Configuring RBAC rules ...
	I1101 09:20:14.101468  243720 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1101 09:20:14.101606  243720 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1101 09:20:14.101828  243720 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1101 09:20:14.102035  243720 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1101 09:20:14.102217  243720 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1101 09:20:14.102363  243720 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1101 09:20:14.102555  243720 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1101 09:20:14.102621  243720 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1101 09:20:14.102689  243720 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1101 09:20:14.102703  243720 kubeadm.go:319] 
	I1101 09:20:14.102786  243720 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1101 09:20:14.102798  243720 kubeadm.go:319] 
	I1101 09:20:14.102924  243720 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1101 09:20:14.102935  243720 kubeadm.go:319] 
	I1101 09:20:14.102970  243720 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1101 09:20:14.103055  243720 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1101 09:20:14.103125  243720 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1101 09:20:14.103133  243720 kubeadm.go:319] 
	I1101 09:20:14.103202  243720 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1101 09:20:14.103209  243720 kubeadm.go:319] 
	I1101 09:20:14.103271  243720 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1101 09:20:14.103280  243720 kubeadm.go:319] 
	I1101 09:20:14.103347  243720 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1101 09:20:14.103468  243720 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1101 09:20:14.103577  243720 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1101 09:20:14.103592  243720 kubeadm.go:319] 
	I1101 09:20:14.103735  243720 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1101 09:20:14.103900  243720 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1101 09:20:14.103915  243720 kubeadm.go:319] 
	I1101 09:20:14.104003  243720 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token gssfgv.jqrlt9glauno6hsk \
	I1101 09:20:14.104149  243720 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:77adef89518789e9abc276228bed68518bce4d031c33004dbbcc2105ab5e0752 \
	I1101 09:20:14.104187  243720 kubeadm.go:319] 	--control-plane 
	I1101 09:20:14.104197  243720 kubeadm.go:319] 
	I1101 09:20:14.104318  243720 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1101 09:20:14.104328  243720 kubeadm.go:319] 
	I1101 09:20:14.104452  243720 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token gssfgv.jqrlt9glauno6hsk \
	I1101 09:20:14.104629  243720 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:77adef89518789e9abc276228bed68518bce4d031c33004dbbcc2105ab5e0752 
	I1101 09:20:14.104647  243720 cni.go:84] Creating CNI manager for ""
	I1101 09:20:14.104657  243720 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1101 09:20:14.106266  243720 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1101 09:20:13.104698  248920 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1101 09:20:13.104729  248920 machine.go:97] duration metric: took 4.648154557s to provisionDockerMachine
	I1101 09:20:13.104744  248920 start.go:293] postStartSetup for "no-preload-397460" (driver="docker")
	I1101 09:20:13.104758  248920 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1101 09:20:13.107111  248920 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1101 09:20:13.107569  248920 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-397460
	I1101 09:20:13.131359  248920 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33073 SSHKeyPath:/home/jenkins/minikube-integration/21835-5913/.minikube/machines/no-preload-397460/id_rsa Username:docker}
	I1101 09:20:13.237156  248920 ssh_runner.go:195] Run: cat /etc/os-release
	I1101 09:20:13.241084  248920 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1101 09:20:13.241111  248920 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1101 09:20:13.241122  248920 filesync.go:126] Scanning /home/jenkins/minikube-integration/21835-5913/.minikube/addons for local assets ...
	I1101 09:20:13.241196  248920 filesync.go:126] Scanning /home/jenkins/minikube-integration/21835-5913/.minikube/files for local assets ...
	I1101 09:20:13.241292  248920 filesync.go:149] local asset: /home/jenkins/minikube-integration/21835-5913/.minikube/files/etc/ssl/certs/94142.pem -> 94142.pem in /etc/ssl/certs
	I1101 09:20:13.241404  248920 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1101 09:20:13.251101  248920 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-5913/.minikube/files/etc/ssl/certs/94142.pem --> /etc/ssl/certs/94142.pem (1708 bytes)
	I1101 09:20:13.273312  248920 start.go:296] duration metric: took 168.551385ms for postStartSetup
	I1101 09:20:13.273401  248920 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1101 09:20:13.273463  248920 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-397460
	I1101 09:20:13.294402  248920 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33073 SSHKeyPath:/home/jenkins/minikube-integration/21835-5913/.minikube/machines/no-preload-397460/id_rsa Username:docker}
	I1101 09:20:13.400488  248920 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1101 09:20:13.405759  248920 fix.go:56] duration metric: took 5.297021938s for fixHost
	I1101 09:20:13.405788  248920 start.go:83] releasing machines lock for "no-preload-397460", held for 5.29707754s
	I1101 09:20:13.405858  248920 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-397460
	I1101 09:20:13.424825  248920 ssh_runner.go:195] Run: cat /version.json
	I1101 09:20:13.424905  248920 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-397460
	I1101 09:20:13.424931  248920 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1101 09:20:13.424999  248920 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-397460
	I1101 09:20:13.444775  248920 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33073 SSHKeyPath:/home/jenkins/minikube-integration/21835-5913/.minikube/machines/no-preload-397460/id_rsa Username:docker}
	I1101 09:20:13.445060  248920 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33073 SSHKeyPath:/home/jenkins/minikube-integration/21835-5913/.minikube/machines/no-preload-397460/id_rsa Username:docker}
	I1101 09:20:13.548547  248920 ssh_runner.go:195] Run: systemctl --version
	I1101 09:20:13.603217  248920 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1101 09:20:13.641966  248920 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1101 09:20:13.646938  248920 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1101 09:20:13.647008  248920 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1101 09:20:13.656202  248920 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1101 09:20:13.656229  248920 start.go:496] detecting cgroup driver to use...
	I1101 09:20:13.656259  248920 detect.go:190] detected "systemd" cgroup driver on host os
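detect.go reports the host's cgroup driver before CRI-O is configured. A rough stand-in for that step is to treat the host as systemd-managed when /run/systemd/system exists; this heuristic is an assumption for illustration, not minikube's exact logic:

	package main

	import (
		"fmt"
		"os"
	)

	func main() {
		driver := "cgroupfs"
		// Presence of /run/systemd/system is a common signal that systemd is PID 1.
		if fi, err := os.Stat("/run/systemd/system"); err == nil && fi.IsDir() {
			driver = "systemd"
		}
		fmt.Println("detected cgroup driver:", driver)
	}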
	I1101 09:20:13.656297  248920 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1101 09:20:13.672199  248920 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1101 09:20:13.687447  248920 docker.go:218] disabling cri-docker service (if available) ...
	I1101 09:20:13.687500  248920 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1101 09:20:13.704163  248920 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1101 09:20:13.718844  248920 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1101 09:20:13.811549  248920 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1101 09:20:13.906510  248920 docker.go:234] disabling docker service ...
	I1101 09:20:13.906587  248920 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1101 09:20:13.924462  248920 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1101 09:20:13.939198  248920 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1101 09:20:14.029766  248920 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1101 09:20:14.132979  248920 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1101 09:20:14.146879  248920 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1101 09:20:14.163240  248920 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1101 09:20:14.163303  248920 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:20:14.174196  248920 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1101 09:20:14.174276  248920 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:20:14.185419  248920 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:20:14.197000  248920 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:20:14.208317  248920 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1101 09:20:14.219150  248920 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:20:14.231884  248920 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:20:14.243191  248920 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:20:14.254600  248920 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1101 09:20:14.264318  248920 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1101 09:20:14.273416  248920 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 09:20:14.389979  248920 ssh_runner.go:195] Run: sudo systemctl restart crio
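The sed commands above pin the pause image and switch CRI-O to the systemd cgroup manager in the 02-crio.conf drop-in before crio is restarted. A rough Go equivalent of the first two edits, using the same file path and values as the log (a sketch to be run as root on the node, not minikube's implementation):

	package main

	import (
		"os"
		"regexp"
	)

	func main() {
		const path = "/etc/crio/crio.conf.d/02-crio.conf"
		data, err := os.ReadFile(path)
		if err != nil {
			panic(err)
		}
		// Rewrite whole lines, mirroring sed's s|^.*key = .*$|key = "value"| edits.
		out := regexp.MustCompile(`(?m)^.*pause_image = .*$`).
			ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.10.1"`))
		out = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
			ReplaceAll(out, []byte(`cgroup_manager = "systemd"`))
		if err := os.WriteFile(path, out, 0o644); err != nil {
			panic(err)
		}
	}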
	I1101 09:20:14.516334  248920 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1101 09:20:14.516410  248920 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1101 09:20:14.521217  248920 start.go:564] Will wait 60s for crictl version
	I1101 09:20:14.521279  248920 ssh_runner.go:195] Run: which crictl
	I1101 09:20:14.525461  248920 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1101 09:20:14.554083  248920 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1101 09:20:14.554179  248920 ssh_runner.go:195] Run: crio --version
	I1101 09:20:14.583300  248920 ssh_runner.go:195] Run: crio --version
	I1101 09:20:14.614825  248920 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1101 09:20:10.473709  216020 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1101 09:20:10.474167  216020 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
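The healthz check above is a plain HTTPS GET against the apiserver; while the control plane is restarting it fails with "connection refused", which is what triggers the log gathering below. A minimal sketch of the same probe (TLS verification is skipped here for brevity, an assumption rather than what api_server.go actually configures):

	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			Timeout: 5 * time.Second,
			Transport: &http.Transport{
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		resp, err := client.Get("https://192.168.85.2:8443/healthz")
		if err != nil {
			fmt.Println("stopped:", err) // e.g. connection refused while the apiserver is down
			return
		}
		defer resp.Body.Close()
		fmt.Println("healthz:", resp.Status)
	}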
	I1101 09:20:10.474221  216020 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1101 09:20:10.474281  216020 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1101 09:20:10.504207  216020 cri.go:89] found id: "f1d1ad774988f50a8cc9b852d65649e1e6961c9ea7280d9c95161580ca21bce2"
	I1101 09:20:10.504231  216020 cri.go:89] found id: ""
	I1101 09:20:10.504241  216020 logs.go:282] 1 containers: [f1d1ad774988f50a8cc9b852d65649e1e6961c9ea7280d9c95161580ca21bce2]
	I1101 09:20:10.504296  216020 ssh_runner.go:195] Run: which crictl
	I1101 09:20:10.508418  216020 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1101 09:20:10.508488  216020 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1101 09:20:10.545610  216020 cri.go:89] found id: ""
	I1101 09:20:10.545641  216020 logs.go:282] 0 containers: []
	W1101 09:20:10.545652  216020 logs.go:284] No container was found matching "etcd"
	I1101 09:20:10.545661  216020 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1101 09:20:10.545732  216020 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1101 09:20:10.577320  216020 cri.go:89] found id: ""
	I1101 09:20:10.577357  216020 logs.go:282] 0 containers: []
	W1101 09:20:10.577371  216020 logs.go:284] No container was found matching "coredns"
	I1101 09:20:10.577379  216020 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1101 09:20:10.577438  216020 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1101 09:20:10.609735  216020 cri.go:89] found id: "7bc51deb4b19b20f402b361941a1b67db11ece9919ebd34259e751ec1f07abb8"
	I1101 09:20:10.609757  216020 cri.go:89] found id: ""
	I1101 09:20:10.609765  216020 logs.go:282] 1 containers: [7bc51deb4b19b20f402b361941a1b67db11ece9919ebd34259e751ec1f07abb8]
	I1101 09:20:10.609818  216020 ssh_runner.go:195] Run: which crictl
	I1101 09:20:10.614568  216020 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1101 09:20:10.614644  216020 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1101 09:20:10.669083  216020 cri.go:89] found id: ""
	I1101 09:20:10.669115  216020 logs.go:282] 0 containers: []
	W1101 09:20:10.669126  216020 logs.go:284] No container was found matching "kube-proxy"
	I1101 09:20:10.669134  216020 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1101 09:20:10.669193  216020 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1101 09:20:10.706569  216020 cri.go:89] found id: "e12b6f88db2ee59fe145026094d5a5de34732fe6f41de9f534b1a2f1a975b677"
	I1101 09:20:10.706596  216020 cri.go:89] found id: ""
	I1101 09:20:10.706605  216020 logs.go:282] 1 containers: [e12b6f88db2ee59fe145026094d5a5de34732fe6f41de9f534b1a2f1a975b677]
	I1101 09:20:10.706662  216020 ssh_runner.go:195] Run: which crictl
	I1101 09:20:10.714971  216020 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1101 09:20:10.715043  216020 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1101 09:20:10.752475  216020 cri.go:89] found id: ""
	I1101 09:20:10.752504  216020 logs.go:282] 0 containers: []
	W1101 09:20:10.752515  216020 logs.go:284] No container was found matching "kindnet"
	I1101 09:20:10.752525  216020 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1101 09:20:10.752588  216020 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1101 09:20:10.807533  216020 cri.go:89] found id: ""
	I1101 09:20:10.807619  216020 logs.go:282] 0 containers: []
	W1101 09:20:10.807647  216020 logs.go:284] No container was found matching "storage-provisioner"
	I1101 09:20:10.807662  216020 logs.go:123] Gathering logs for container status ...
	I1101 09:20:10.807677  216020 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1101 09:20:10.843294  216020 logs.go:123] Gathering logs for kubelet ...
	I1101 09:20:10.843320  216020 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1101 09:20:10.948899  216020 logs.go:123] Gathering logs for dmesg ...
	I1101 09:20:10.948939  216020 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1101 09:20:10.964679  216020 logs.go:123] Gathering logs for describe nodes ...
	I1101 09:20:10.964709  216020 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1101 09:20:11.023529  216020 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1101 09:20:11.023556  216020 logs.go:123] Gathering logs for kube-apiserver [f1d1ad774988f50a8cc9b852d65649e1e6961c9ea7280d9c95161580ca21bce2] ...
	I1101 09:20:11.023570  216020 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f1d1ad774988f50a8cc9b852d65649e1e6961c9ea7280d9c95161580ca21bce2"
	I1101 09:20:11.056373  216020 logs.go:123] Gathering logs for kube-scheduler [7bc51deb4b19b20f402b361941a1b67db11ece9919ebd34259e751ec1f07abb8] ...
	I1101 09:20:11.056408  216020 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7bc51deb4b19b20f402b361941a1b67db11ece9919ebd34259e751ec1f07abb8"
	I1101 09:20:11.103571  216020 logs.go:123] Gathering logs for kube-controller-manager [e12b6f88db2ee59fe145026094d5a5de34732fe6f41de9f534b1a2f1a975b677] ...
	I1101 09:20:11.103603  216020 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e12b6f88db2ee59fe145026094d5a5de34732fe6f41de9f534b1a2f1a975b677"
	I1101 09:20:11.131577  216020 logs.go:123] Gathering logs for CRI-O ...
	I1101 09:20:11.131602  216020 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1101 09:20:13.686250  216020 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1101 09:20:13.686630  216020 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1101 09:20:13.686683  216020 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1101 09:20:13.686728  216020 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1101 09:20:13.715152  216020 cri.go:89] found id: "f1d1ad774988f50a8cc9b852d65649e1e6961c9ea7280d9c95161580ca21bce2"
	I1101 09:20:13.715177  216020 cri.go:89] found id: ""
	I1101 09:20:13.715185  216020 logs.go:282] 1 containers: [f1d1ad774988f50a8cc9b852d65649e1e6961c9ea7280d9c95161580ca21bce2]
	I1101 09:20:13.715237  216020 ssh_runner.go:195] Run: which crictl
	I1101 09:20:13.719655  216020 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1101 09:20:13.719713  216020 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1101 09:20:13.749238  216020 cri.go:89] found id: ""
	I1101 09:20:13.749275  216020 logs.go:282] 0 containers: []
	W1101 09:20:13.749287  216020 logs.go:284] No container was found matching "etcd"
	I1101 09:20:13.749296  216020 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1101 09:20:13.749361  216020 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1101 09:20:13.787283  216020 cri.go:89] found id: ""
	I1101 09:20:13.787313  216020 logs.go:282] 0 containers: []
	W1101 09:20:13.787324  216020 logs.go:284] No container was found matching "coredns"
	I1101 09:20:13.787331  216020 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1101 09:20:13.787386  216020 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1101 09:20:13.816486  216020 cri.go:89] found id: "7bc51deb4b19b20f402b361941a1b67db11ece9919ebd34259e751ec1f07abb8"
	I1101 09:20:13.816506  216020 cri.go:89] found id: ""
	I1101 09:20:13.816514  216020 logs.go:282] 1 containers: [7bc51deb4b19b20f402b361941a1b67db11ece9919ebd34259e751ec1f07abb8]
	I1101 09:20:13.816559  216020 ssh_runner.go:195] Run: which crictl
	I1101 09:20:13.820835  216020 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1101 09:20:13.820918  216020 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1101 09:20:13.855414  216020 cri.go:89] found id: ""
	I1101 09:20:13.855442  216020 logs.go:282] 0 containers: []
	W1101 09:20:13.855454  216020 logs.go:284] No container was found matching "kube-proxy"
	I1101 09:20:13.855536  216020 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1101 09:20:13.855603  216020 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1101 09:20:13.885718  216020 cri.go:89] found id: "e12b6f88db2ee59fe145026094d5a5de34732fe6f41de9f534b1a2f1a975b677"
	I1101 09:20:13.885742  216020 cri.go:89] found id: ""
	I1101 09:20:13.885749  216020 logs.go:282] 1 containers: [e12b6f88db2ee59fe145026094d5a5de34732fe6f41de9f534b1a2f1a975b677]
	I1101 09:20:13.885794  216020 ssh_runner.go:195] Run: which crictl
	I1101 09:20:13.890312  216020 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1101 09:20:13.890390  216020 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1101 09:20:13.919995  216020 cri.go:89] found id: ""
	I1101 09:20:13.920018  216020 logs.go:282] 0 containers: []
	W1101 09:20:13.920027  216020 logs.go:284] No container was found matching "kindnet"
	I1101 09:20:13.920035  216020 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1101 09:20:13.920092  216020 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1101 09:20:13.950901  216020 cri.go:89] found id: ""
	I1101 09:20:13.950927  216020 logs.go:282] 0 containers: []
	W1101 09:20:13.950935  216020 logs.go:284] No container was found matching "storage-provisioner"
	I1101 09:20:13.950943  216020 logs.go:123] Gathering logs for kube-controller-manager [e12b6f88db2ee59fe145026094d5a5de34732fe6f41de9f534b1a2f1a975b677] ...
	I1101 09:20:13.950962  216020 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e12b6f88db2ee59fe145026094d5a5de34732fe6f41de9f534b1a2f1a975b677"
	I1101 09:20:13.986512  216020 logs.go:123] Gathering logs for CRI-O ...
	I1101 09:20:13.986546  216020 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1101 09:20:14.041182  216020 logs.go:123] Gathering logs for container status ...
	I1101 09:20:14.041223  216020 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1101 09:20:14.079466  216020 logs.go:123] Gathering logs for kubelet ...
	I1101 09:20:14.079501  216020 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1101 09:20:14.171016  216020 logs.go:123] Gathering logs for dmesg ...
	I1101 09:20:14.171054  216020 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1101 09:20:14.190005  216020 logs.go:123] Gathering logs for describe nodes ...
	I1101 09:20:14.190037  216020 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1101 09:20:14.264146  216020 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1101 09:20:14.264169  216020 logs.go:123] Gathering logs for kube-apiserver [f1d1ad774988f50a8cc9b852d65649e1e6961c9ea7280d9c95161580ca21bce2] ...
	I1101 09:20:14.264184  216020 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f1d1ad774988f50a8cc9b852d65649e1e6961c9ea7280d9c95161580ca21bce2"
	I1101 09:20:14.302229  216020 logs.go:123] Gathering logs for kube-scheduler [7bc51deb4b19b20f402b361941a1b67db11ece9919ebd34259e751ec1f07abb8] ...
	I1101 09:20:14.302268  216020 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7bc51deb4b19b20f402b361941a1b67db11ece9919ebd34259e751ec1f07abb8"
	I1101 09:20:14.616018  248920 cli_runner.go:164] Run: docker network inspect no-preload-397460 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1101 09:20:14.635170  248920 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I1101 09:20:14.639617  248920 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
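The one-liner above rewrites /etc/hosts idempotently: drop any stale host.minikube.internal entry, then append the current mapping. The same logic in Go terms (path and IP mirror the log; purely illustrative):

	package main

	import (
		"os"
		"strings"
	)

	func main() {
		const path = "/etc/hosts"
		data, err := os.ReadFile(path)
		if err != nil {
			panic(err)
		}
		var kept []string
		for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
			// Keep every line except an existing host.minikube.internal entry.
			if !strings.HasSuffix(line, "\thost.minikube.internal") {
				kept = append(kept, line)
			}
		}
		kept = append(kept, "192.168.94.1\thost.minikube.internal")
		if err := os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644); err != nil {
			panic(err)
		}
	}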
	I1101 09:20:14.650488  248920 kubeadm.go:884] updating cluster {Name:no-preload-397460 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-397460 Namespace:default APIServerHAVIP: APIServerName:minikubeCA API
ServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker
BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1101 09:20:14.650590  248920 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1101 09:20:14.650618  248920 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 09:20:14.683488  248920 crio.go:514] all images are preloaded for cri-o runtime.
	I1101 09:20:14.683507  248920 cache_images.go:86] Images are preloaded, skipping loading
	I1101 09:20:14.683514  248920 kubeadm.go:935] updating node { 192.168.94.2 8443 v1.34.1 crio true true} ...
	I1101 09:20:14.683597  248920 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=no-preload-397460 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:no-preload-397460 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1101 09:20:14.683654  248920 ssh_runner.go:195] Run: crio config
	I1101 09:20:14.730778  248920 cni.go:84] Creating CNI manager for ""
	I1101 09:20:14.730803  248920 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1101 09:20:14.730820  248920 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1101 09:20:14.730845  248920 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-397460 NodeName:no-preload-397460 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/
kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1101 09:20:14.730998  248920 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-397460"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.94.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1101 09:20:14.731056  248920 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1101 09:20:14.740183  248920 binaries.go:44] Found k8s binaries, skipping transfer
	I1101 09:20:14.740251  248920 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1101 09:20:14.748523  248920 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1101 09:20:14.763548  248920 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1101 09:20:14.777018  248920 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2213 bytes)
	I1101 09:20:14.791107  248920 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I1101 09:20:14.795024  248920 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1101 09:20:14.806532  248920 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 09:20:14.891728  248920 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1101 09:20:14.914603  248920 certs.go:69] Setting up /home/jenkins/minikube-integration/21835-5913/.minikube/profiles/no-preload-397460 for IP: 192.168.94.2
	I1101 09:20:14.914621  248920 certs.go:195] generating shared ca certs ...
	I1101 09:20:14.914643  248920 certs.go:227] acquiring lock for ca certs: {Name:mkfdee6a84670347521013ebeef165551380cb9c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:20:14.914796  248920 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21835-5913/.minikube/ca.key
	I1101 09:20:14.914843  248920 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21835-5913/.minikube/proxy-client-ca.key
	I1101 09:20:14.914858  248920 certs.go:257] generating profile certs ...
	I1101 09:20:14.915010  248920 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21835-5913/.minikube/profiles/no-preload-397460/client.key
	I1101 09:20:14.915100  248920 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21835-5913/.minikube/profiles/no-preload-397460/apiserver.key.7741ef4f
	I1101 09:20:14.915151  248920 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21835-5913/.minikube/profiles/no-preload-397460/proxy-client.key
	I1101 09:20:14.915286  248920 certs.go:484] found cert: /home/jenkins/minikube-integration/21835-5913/.minikube/certs/9414.pem (1338 bytes)
	W1101 09:20:14.915328  248920 certs.go:480] ignoring /home/jenkins/minikube-integration/21835-5913/.minikube/certs/9414_empty.pem, impossibly tiny 0 bytes
	I1101 09:20:14.915342  248920 certs.go:484] found cert: /home/jenkins/minikube-integration/21835-5913/.minikube/certs/ca-key.pem (1675 bytes)
	I1101 09:20:14.915374  248920 certs.go:484] found cert: /home/jenkins/minikube-integration/21835-5913/.minikube/certs/ca.pem (1078 bytes)
	I1101 09:20:14.915413  248920 certs.go:484] found cert: /home/jenkins/minikube-integration/21835-5913/.minikube/certs/cert.pem (1123 bytes)
	I1101 09:20:14.915445  248920 certs.go:484] found cert: /home/jenkins/minikube-integration/21835-5913/.minikube/certs/key.pem (1675 bytes)
	I1101 09:20:14.915507  248920 certs.go:484] found cert: /home/jenkins/minikube-integration/21835-5913/.minikube/files/etc/ssl/certs/94142.pem (1708 bytes)
	I1101 09:20:14.916308  248920 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-5913/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1101 09:20:14.950319  248920 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-5913/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1101 09:20:14.970987  248920 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-5913/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1101 09:20:14.995005  248920 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-5913/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1101 09:20:15.020561  248920 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-5913/.minikube/profiles/no-preload-397460/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1101 09:20:15.042803  248920 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-5913/.minikube/profiles/no-preload-397460/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1101 09:20:15.062273  248920 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-5913/.minikube/profiles/no-preload-397460/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1101 09:20:15.080756  248920 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-5913/.minikube/profiles/no-preload-397460/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1101 09:20:15.099681  248920 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-5913/.minikube/files/etc/ssl/certs/94142.pem --> /usr/share/ca-certificates/94142.pem (1708 bytes)
	I1101 09:20:15.119099  248920 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-5913/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1101 09:20:15.139312  248920 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-5913/.minikube/certs/9414.pem --> /usr/share/ca-certificates/9414.pem (1338 bytes)
	I1101 09:20:15.157324  248920 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1101 09:20:15.170749  248920 ssh_runner.go:195] Run: openssl version
	I1101 09:20:15.177304  248920 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/94142.pem && ln -fs /usr/share/ca-certificates/94142.pem /etc/ssl/certs/94142.pem"
	I1101 09:20:15.187601  248920 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/94142.pem
	I1101 09:20:15.191608  248920 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  1 08:35 /usr/share/ca-certificates/94142.pem
	I1101 09:20:15.191683  248920 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/94142.pem
	I1101 09:20:15.227612  248920 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/94142.pem /etc/ssl/certs/3ec20f2e.0"
	I1101 09:20:15.236602  248920 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1101 09:20:15.245777  248920 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1101 09:20:15.249687  248920 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  1 08:29 /usr/share/ca-certificates/minikubeCA.pem
	I1101 09:20:15.249775  248920 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1101 09:20:15.287639  248920 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1101 09:20:15.296656  248920 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9414.pem && ln -fs /usr/share/ca-certificates/9414.pem /etc/ssl/certs/9414.pem"
	I1101 09:20:15.306457  248920 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9414.pem
	I1101 09:20:15.310618  248920 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  1 08:35 /usr/share/ca-certificates/9414.pem
	I1101 09:20:15.310677  248920 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9414.pem
	I1101 09:20:15.348513  248920 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/9414.pem /etc/ssl/certs/51391683.0"
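The three hash-and-symlink sequences above follow the standard OpenSSL trust-store convention: each certificate under /usr/share/ca-certificates is hashed with `openssl x509 -hash -noout` and then linked into /etc/ssl/certs as `<hash>.0`. A minimal Go sketch of that same idea, assuming the openssl CLI is available on the target host (package, function name, and paths are illustrative only, not minikube's actual code):

```go
// Rough equivalent of the shell steps logged above:
//   openssl x509 -hash -noout -in <pem>
//   test -L /etc/ssl/certs/<hash>.0 || ln -fs <pem> /etc/ssl/certs/<hash>.0
package trust

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func linkIntoTrustStore(pem string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
	if _, err := os.Lstat(link); err == nil {
		return nil // symlink already present
	}
	return os.Symlink(pem, link)
}
```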
	I1101 09:20:15.357390  248920 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1101 09:20:15.361677  248920 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1101 09:20:15.398646  248920 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1101 09:20:15.436324  248920 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1101 09:20:15.484513  248920 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1101 09:20:15.544722  248920 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1101 09:20:15.603816  248920 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
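The `-checkend 86400` invocations above ask openssl whether each control-plane certificate will still be valid 86400 seconds (24 hours) from now. A rough equivalent using Go's crypto/x509, offered only as a sketch (the package and function names are made up for illustration):

```go
// Sketch of "openssl x509 -noout -in <path> -checkend <seconds>":
// fail if the certificate expires within the given window.
package certcheck

import (
	"crypto/x509"
	"encoding/pem"
	"errors"
	"os"
	"time"
)

func validFor(path string, window time.Duration) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return errors.New("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return err
	}
	if time.Now().Add(window).After(cert.NotAfter) {
		return errors.New("certificate expires within the check window")
	}
	return nil
}
```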
	I1101 09:20:15.659395  248920 kubeadm.go:401] StartCluster: {Name:no-preload-397460 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-397460 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 09:20:15.659498  248920 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1101 09:20:15.659544  248920 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1101 09:20:15.694366  248920 cri.go:89] found id: "fd6c2e567397890a7f512b409824d618cc40968b819d316cdae2a59eaaeee805"
	I1101 09:20:15.694388  248920 cri.go:89] found id: "1e65eafe05118922eba4075b65156c842b7bb2e5dc4b74d48586e74e8830e4ad"
	I1101 09:20:15.694392  248920 cri.go:89] found id: "1b44261dff64d3c4c47d37a621f72a585d7a917cf393d156278c5cbcf49d2100"
	I1101 09:20:15.694395  248920 cri.go:89] found id: "a154077d09e972273696e9d1d20b891c240a792171f425c23b57e8599069bf1b"
	I1101 09:20:15.694397  248920 cri.go:89] found id: ""
	I1101 09:20:15.694445  248920 ssh_runner.go:195] Run: sudo runc list -f json
	W1101 09:20:15.708789  248920 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T09:20:15Z" level=error msg="open /run/runc: no such file or directory"
	I1101 09:20:15.708883  248920 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1101 09:20:15.719904  248920 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1101 09:20:15.720003  248920 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1101 09:20:15.720116  248920 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1101 09:20:15.728943  248920 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1101 09:20:15.729723  248920 kubeconfig.go:47] verify endpoint returned: get endpoint: "no-preload-397460" does not appear in /home/jenkins/minikube-integration/21835-5913/kubeconfig
	I1101 09:20:15.730217  248920 kubeconfig.go:62] /home/jenkins/minikube-integration/21835-5913/kubeconfig needs updating (will repair): [kubeconfig missing "no-preload-397460" cluster setting kubeconfig missing "no-preload-397460" context setting]
	I1101 09:20:15.730957  248920 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21835-5913/kubeconfig: {Name:mk9d3795c1875d6d9ba81c81d9d8436a6b7942d2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:20:15.732801  248920 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1101 09:20:15.742034  248920 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.94.2
	I1101 09:20:15.742073  248920 kubeadm.go:602] duration metric: took 22.059825ms to restartPrimaryControlPlane
	I1101 09:20:15.742083  248920 kubeadm.go:403] duration metric: took 82.695583ms to StartCluster
	I1101 09:20:15.742102  248920 settings.go:142] acquiring lock: {Name:mkb1ba7d0d4bb15f3f0746ce486d72703f901580 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:20:15.742168  248920 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21835-5913/kubeconfig
	I1101 09:20:15.743597  248920 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21835-5913/kubeconfig: {Name:mk9d3795c1875d6d9ba81c81d9d8436a6b7942d2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:20:15.743887  248920 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1101 09:20:15.743998  248920 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1101 09:20:15.744095  248920 config.go:182] Loaded profile config "no-preload-397460": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:20:15.744113  248920 addons.go:70] Setting storage-provisioner=true in profile "no-preload-397460"
	I1101 09:20:15.744133  248920 addons.go:239] Setting addon storage-provisioner=true in "no-preload-397460"
	W1101 09:20:15.744145  248920 addons.go:248] addon storage-provisioner should already be in state true
	I1101 09:20:15.744145  248920 addons.go:70] Setting dashboard=true in profile "no-preload-397460"
	I1101 09:20:15.744159  248920 addons.go:239] Setting addon dashboard=true in "no-preload-397460"
	W1101 09:20:15.744167  248920 addons.go:248] addon dashboard should already be in state true
	I1101 09:20:15.744174  248920 host.go:66] Checking if "no-preload-397460" exists ...
	I1101 09:20:15.744188  248920 host.go:66] Checking if "no-preload-397460" exists ...
	I1101 09:20:15.744650  248920 cli_runner.go:164] Run: docker container inspect no-preload-397460 --format={{.State.Status}}
	I1101 09:20:15.744687  248920 cli_runner.go:164] Run: docker container inspect no-preload-397460 --format={{.State.Status}}
	I1101 09:20:15.744749  248920 addons.go:70] Setting default-storageclass=true in profile "no-preload-397460"
	I1101 09:20:15.744780  248920 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "no-preload-397460"
	I1101 09:20:15.745116  248920 cli_runner.go:164] Run: docker container inspect no-preload-397460 --format={{.State.Status}}
	I1101 09:20:15.749991  248920 out.go:179] * Verifying Kubernetes components...
	I1101 09:20:15.754032  248920 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 09:20:15.771888  248920 addons.go:239] Setting addon default-storageclass=true in "no-preload-397460"
	W1101 09:20:15.771913  248920 addons.go:248] addon default-storageclass should already be in state true
	I1101 09:20:15.771946  248920 host.go:66] Checking if "no-preload-397460" exists ...
	I1101 09:20:15.772357  248920 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1101 09:20:15.772413  248920 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1101 09:20:15.772417  248920 cli_runner.go:164] Run: docker container inspect no-preload-397460 --format={{.State.Status}}
	I1101 09:20:15.774473  248920 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1101 09:20:15.774497  248920 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1101 09:20:15.774561  248920 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-397460
	I1101 09:20:15.776454  248920 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1101 09:20:14.107398  243720 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1101 09:20:14.111968  243720 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1101 09:20:14.111987  243720 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1101 09:20:14.125551  243720 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1101 09:20:14.391492  243720 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1101 09:20:14.391546  243720 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 09:20:14.391571  243720 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-236314 minikube.k8s.io/updated_at=2025_11_01T09_20_14_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=21e20c7776311c6e29254646bf2620ea610dd192 minikube.k8s.io/name=embed-certs-236314 minikube.k8s.io/primary=true
	I1101 09:20:14.404266  243720 ops.go:34] apiserver oom_adj: -16
	I1101 09:20:14.471910  243720 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 09:20:14.972041  243720 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 09:20:15.472556  243720 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 09:20:15.972572  243720 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 09:20:16.472037  243720 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	W1101 09:20:14.758553  245059 pod_ready.go:104] pod "coredns-5dd5756b68-gcvgr" is not "Ready", error: <nil>
	W1101 09:20:17.259506  245059 pod_ready.go:104] pod "coredns-5dd5756b68-gcvgr" is not "Ready", error: <nil>
	I1101 09:20:15.777449  248920 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1101 09:20:15.777487  248920 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1101 09:20:15.777560  248920 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-397460
	I1101 09:20:15.795598  248920 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1101 09:20:15.795632  248920 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1101 09:20:15.795733  248920 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-397460
	I1101 09:20:15.806154  248920 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33073 SSHKeyPath:/home/jenkins/minikube-integration/21835-5913/.minikube/machines/no-preload-397460/id_rsa Username:docker}
	I1101 09:20:15.807156  248920 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33073 SSHKeyPath:/home/jenkins/minikube-integration/21835-5913/.minikube/machines/no-preload-397460/id_rsa Username:docker}
	I1101 09:20:15.832583  248920 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33073 SSHKeyPath:/home/jenkins/minikube-integration/21835-5913/.minikube/machines/no-preload-397460/id_rsa Username:docker}
	I1101 09:20:15.905980  248920 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1101 09:20:15.920374  248920 node_ready.go:35] waiting up to 6m0s for node "no-preload-397460" to be "Ready" ...
	I1101 09:20:15.933046  248920 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1101 09:20:15.933071  248920 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1101 09:20:15.934312  248920 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1101 09:20:15.949050  248920 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1101 09:20:15.950789  248920 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1101 09:20:15.950813  248920 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1101 09:20:15.967222  248920 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1101 09:20:15.967247  248920 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1101 09:20:15.988679  248920 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1101 09:20:15.988711  248920 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1101 09:20:16.011657  248920 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1101 09:20:16.011685  248920 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1101 09:20:16.030262  248920 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1101 09:20:16.030287  248920 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1101 09:20:16.050457  248920 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1101 09:20:16.050484  248920 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1101 09:20:16.067181  248920 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1101 09:20:16.067209  248920 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1101 09:20:16.082625  248920 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1101 09:20:16.082682  248920 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1101 09:20:16.096691  248920 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1101 09:20:17.443199  248920 node_ready.go:49] node "no-preload-397460" is "Ready"
	I1101 09:20:17.443242  248920 node_ready.go:38] duration metric: took 1.522834025s for node "no-preload-397460" to be "Ready" ...
	I1101 09:20:17.443260  248920 api_server.go:52] waiting for apiserver process to appear ...
	I1101 09:20:17.443316  248920 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 09:20:18.081766  248920 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.147416614s)
	I1101 09:20:18.081909  248920 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.132772872s)
	I1101 09:20:18.082047  248920 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.985312521s)
	I1101 09:20:18.082068  248920 api_server.go:72] duration metric: took 2.338149068s to wait for apiserver process to appear ...
	I1101 09:20:18.082079  248920 api_server.go:88] waiting for apiserver healthz status ...
	I1101 09:20:18.082121  248920 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1101 09:20:18.083721  248920 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p no-preload-397460 addons enable metrics-server
	
	I1101 09:20:18.087294  248920 api_server.go:279] https://192.168.94.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1101 09:20:18.087322  248920 api_server.go:103] status: https://192.168.94.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
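The 500 responses above come from the apiserver's /healthz endpoint while its post-start hooks (here rbac/bootstrap-roles and scheduling/bootstrap-system-priority-classes) are still completing; the wait loop simply re-polls until the endpoint returns 200. A minimal sketch of such a retry loop, assuming TLS verification is skipped purely for illustration (package and function names are hypothetical, not minikube's implementation):

```go
// Poll an apiserver /healthz URL until it answers 200 or the deadline passes.
package healthz

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func waitHealthy(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   2 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // healthz returned 200 "ok"
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("healthz not ready within %s", timeout)
}
```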
	I1101 09:20:18.089409  248920 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1101 09:20:16.972821  243720 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 09:20:17.472005  243720 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 09:20:17.971977  243720 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 09:20:18.472118  243720 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 09:20:18.972205  243720 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 09:20:19.074635  243720 kubeadm.go:1114] duration metric: took 4.683140129s to wait for elevateKubeSystemPrivileges
	I1101 09:20:19.074677  243720 kubeadm.go:403] duration metric: took 17.466721605s to StartCluster
	I1101 09:20:19.074701  243720 settings.go:142] acquiring lock: {Name:mkb1ba7d0d4bb15f3f0746ce486d72703f901580 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:20:19.074918  243720 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21835-5913/kubeconfig
	I1101 09:20:19.077449  243720 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21835-5913/kubeconfig: {Name:mk9d3795c1875d6d9ba81c81d9d8436a6b7942d2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:20:19.077746  243720 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1101 09:20:19.077760  243720 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1101 09:20:19.077818  243720 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1101 09:20:19.077936  243720 addons.go:70] Setting storage-provisioner=true in profile "embed-certs-236314"
	I1101 09:20:19.077957  243720 addons.go:239] Setting addon storage-provisioner=true in "embed-certs-236314"
	I1101 09:20:19.078001  243720 host.go:66] Checking if "embed-certs-236314" exists ...
	I1101 09:20:19.078004  243720 config.go:182] Loaded profile config "embed-certs-236314": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:20:19.078057  243720 addons.go:70] Setting default-storageclass=true in profile "embed-certs-236314"
	I1101 09:20:19.078073  243720 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-236314"
	I1101 09:20:19.078406  243720 cli_runner.go:164] Run: docker container inspect embed-certs-236314 --format={{.State.Status}}
	I1101 09:20:19.078558  243720 cli_runner.go:164] Run: docker container inspect embed-certs-236314 --format={{.State.Status}}
	I1101 09:20:19.080025  243720 out.go:179] * Verifying Kubernetes components...
	I1101 09:20:19.082905  243720 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 09:20:19.104714  243720 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1101 09:20:19.106716  243720 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1101 09:20:19.106781  243720 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1101 09:20:19.106845  243720 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-236314
	I1101 09:20:19.109010  243720 addons.go:239] Setting addon default-storageclass=true in "embed-certs-236314"
	I1101 09:20:19.109076  243720 host.go:66] Checking if "embed-certs-236314" exists ...
	I1101 09:20:19.109733  243720 cli_runner.go:164] Run: docker container inspect embed-certs-236314 --format={{.State.Status}}
	I1101 09:20:19.146509  243720 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/21835-5913/.minikube/machines/embed-certs-236314/id_rsa Username:docker}
	I1101 09:20:19.152673  243720 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1101 09:20:19.152702  243720 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1101 09:20:19.152763  243720 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-236314
	I1101 09:20:19.184032  243720 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/21835-5913/.minikube/machines/embed-certs-236314/id_rsa Username:docker}
	I1101 09:20:19.208083  243720 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
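The sed pipeline in the command above rewrites the coredns ConfigMap so that, just before the `forward . /etc/resolv.conf` directive, the Corefile gains a hosts block resolving host.minikube.internal to the gateway address (it also inserts a `log` line before `errors`). Reconstructed from that sed expression, the injected stanza is:

```
        hosts {
           192.168.76.1 host.minikube.internal
           fallthrough
        }
```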
	I1101 09:20:19.281236  243720 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1101 09:20:19.291921  243720 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1101 09:20:19.317020  243720 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1101 09:20:19.447108  243720 start.go:977] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
	I1101 09:20:19.450361  243720 node_ready.go:35] waiting up to 6m0s for node "embed-certs-236314" to be "Ready" ...
	I1101 09:20:19.677503  243720 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1101 09:20:16.871969  216020 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1101 09:20:16.872401  216020 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1101 09:20:16.872459  216020 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1101 09:20:16.872511  216020 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1101 09:20:16.904346  216020 cri.go:89] found id: "f1d1ad774988f50a8cc9b852d65649e1e6961c9ea7280d9c95161580ca21bce2"
	I1101 09:20:16.904373  216020 cri.go:89] found id: ""
	I1101 09:20:16.904383  216020 logs.go:282] 1 containers: [f1d1ad774988f50a8cc9b852d65649e1e6961c9ea7280d9c95161580ca21bce2]
	I1101 09:20:16.904442  216020 ssh_runner.go:195] Run: which crictl
	I1101 09:20:16.908961  216020 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1101 09:20:16.909030  216020 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1101 09:20:16.940434  216020 cri.go:89] found id: ""
	I1101 09:20:16.940464  216020 logs.go:282] 0 containers: []
	W1101 09:20:16.940475  216020 logs.go:284] No container was found matching "etcd"
	I1101 09:20:16.940483  216020 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1101 09:20:16.940536  216020 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1101 09:20:16.980105  216020 cri.go:89] found id: ""
	I1101 09:20:16.980134  216020 logs.go:282] 0 containers: []
	W1101 09:20:16.980145  216020 logs.go:284] No container was found matching "coredns"
	I1101 09:20:16.980152  216020 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1101 09:20:16.980211  216020 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1101 09:20:17.017238  216020 cri.go:89] found id: "7bc51deb4b19b20f402b361941a1b67db11ece9919ebd34259e751ec1f07abb8"
	I1101 09:20:17.017259  216020 cri.go:89] found id: ""
	I1101 09:20:17.017272  216020 logs.go:282] 1 containers: [7bc51deb4b19b20f402b361941a1b67db11ece9919ebd34259e751ec1f07abb8]
	I1101 09:20:17.017332  216020 ssh_runner.go:195] Run: which crictl
	I1101 09:20:17.022299  216020 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1101 09:20:17.022378  216020 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1101 09:20:17.059794  216020 cri.go:89] found id: ""
	I1101 09:20:17.059825  216020 logs.go:282] 0 containers: []
	W1101 09:20:17.059842  216020 logs.go:284] No container was found matching "kube-proxy"
	I1101 09:20:17.059854  216020 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1101 09:20:17.059936  216020 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1101 09:20:17.103149  216020 cri.go:89] found id: "e12b6f88db2ee59fe145026094d5a5de34732fe6f41de9f534b1a2f1a975b677"
	I1101 09:20:17.103176  216020 cri.go:89] found id: ""
	I1101 09:20:17.103185  216020 logs.go:282] 1 containers: [e12b6f88db2ee59fe145026094d5a5de34732fe6f41de9f534b1a2f1a975b677]
	I1101 09:20:17.103246  216020 ssh_runner.go:195] Run: which crictl
	I1101 09:20:17.107678  216020 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1101 09:20:17.107767  216020 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1101 09:20:17.136839  216020 cri.go:89] found id: ""
	I1101 09:20:17.136881  216020 logs.go:282] 0 containers: []
	W1101 09:20:17.136892  216020 logs.go:284] No container was found matching "kindnet"
	I1101 09:20:17.136900  216020 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1101 09:20:17.136962  216020 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1101 09:20:17.172318  216020 cri.go:89] found id: ""
	I1101 09:20:17.172345  216020 logs.go:282] 0 containers: []
	W1101 09:20:17.172357  216020 logs.go:284] No container was found matching "storage-provisioner"
	I1101 09:20:17.172368  216020 logs.go:123] Gathering logs for CRI-O ...
	I1101 09:20:17.172383  216020 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1101 09:20:17.243873  216020 logs.go:123] Gathering logs for container status ...
	I1101 09:20:17.243907  216020 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1101 09:20:17.289056  216020 logs.go:123] Gathering logs for kubelet ...
	I1101 09:20:17.289106  216020 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1101 09:20:17.425450  216020 logs.go:123] Gathering logs for dmesg ...
	I1101 09:20:17.425542  216020 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1101 09:20:17.458500  216020 logs.go:123] Gathering logs for describe nodes ...
	I1101 09:20:17.458591  216020 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1101 09:20:17.560212  216020 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1101 09:20:17.560306  216020 logs.go:123] Gathering logs for kube-apiserver [f1d1ad774988f50a8cc9b852d65649e1e6961c9ea7280d9c95161580ca21bce2] ...
	I1101 09:20:17.560341  216020 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f1d1ad774988f50a8cc9b852d65649e1e6961c9ea7280d9c95161580ca21bce2"
	I1101 09:20:17.607317  216020 logs.go:123] Gathering logs for kube-scheduler [7bc51deb4b19b20f402b361941a1b67db11ece9919ebd34259e751ec1f07abb8] ...
	I1101 09:20:17.607353  216020 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7bc51deb4b19b20f402b361941a1b67db11ece9919ebd34259e751ec1f07abb8"
	I1101 09:20:17.663053  216020 logs.go:123] Gathering logs for kube-controller-manager [e12b6f88db2ee59fe145026094d5a5de34732fe6f41de9f534b1a2f1a975b677] ...
	I1101 09:20:17.663092  216020 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e12b6f88db2ee59fe145026094d5a5de34732fe6f41de9f534b1a2f1a975b677"
	I1101 09:20:19.678595  243720 addons.go:515] duration metric: took 600.774783ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1101 09:20:19.953556  243720 kapi.go:214] "coredns" deployment in "kube-system" namespace and "embed-certs-236314" context rescaled to 1 replicas
	W1101 09:20:21.460183  243720 node_ready.go:57] node "embed-certs-236314" has "Ready":"False" status (will retry)
	W1101 09:20:19.260152  245059 pod_ready.go:104] pod "coredns-5dd5756b68-gcvgr" is not "Ready", error: <nil>
	W1101 09:20:21.760136  245059 pod_ready.go:104] pod "coredns-5dd5756b68-gcvgr" is not "Ready", error: <nil>
	I1101 09:20:18.090642  248920 addons.go:515] duration metric: took 2.346644568s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1101 09:20:18.582255  248920 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1101 09:20:18.587477  248920 api_server.go:279] https://192.168.94.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1101 09:20:18.587514  248920 api_server.go:103] status: https://192.168.94.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1101 09:20:19.084798  248920 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1101 09:20:19.093414  248920 api_server.go:279] https://192.168.94.2:8443/healthz returned 200:
	ok
	I1101 09:20:19.094856  248920 api_server.go:141] control plane version: v1.34.1
	I1101 09:20:19.094907  248920 api_server.go:131] duration metric: took 1.012813026s to wait for apiserver health ...
	I1101 09:20:19.094919  248920 system_pods.go:43] waiting for kube-system pods to appear ...
	I1101 09:20:19.100096  248920 system_pods.go:59] 8 kube-system pods found
	I1101 09:20:19.100143  248920 system_pods.go:61] "coredns-66bc5c9577-z5578" [5bbebf5b-a427-4501-881c-fc445ff4054c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 09:20:19.100154  248920 system_pods.go:61] "etcd-no-preload-397460" [3aa978e5-6af4-4e61-8352-dfa542467d98] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1101 09:20:19.100180  248920 system_pods.go:61] "kindnet-lddf5" [85b09376-b18b-444d-8405-a7045c3732dc] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1101 09:20:19.100194  248920 system_pods.go:61] "kube-apiserver-no-preload-397460" [d14cbd4e-ca20-4299-b00f-56273156c4c1] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1101 09:20:19.100202  248920 system_pods.go:61] "kube-controller-manager-no-preload-397460" [76ef0890-2a18-4ddd-8196-5a505773f7f0] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1101 09:20:19.100218  248920 system_pods.go:61] "kube-proxy-5kpft" [788827b1-dfc6-4921-a791-13a752d335aa] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1101 09:20:19.100234  248920 system_pods.go:61] "kube-scheduler-no-preload-397460" [e06dfb76-9322-497e-a36a-2320f2103cac] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1101 09:20:19.100402  248920 system_pods.go:61] "storage-provisioner" [8c7273a1-68fa-4783-948d-41e29d4fc406] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1101 09:20:19.100415  248920 system_pods.go:74] duration metric: took 5.487985ms to wait for pod list to return data ...
	I1101 09:20:19.100426  248920 default_sa.go:34] waiting for default service account to be created ...
	I1101 09:20:19.105987  248920 default_sa.go:45] found service account: "default"
	I1101 09:20:19.106020  248920 default_sa.go:55] duration metric: took 5.587114ms for default service account to be created ...
	I1101 09:20:19.106032  248920 system_pods.go:116] waiting for k8s-apps to be running ...
	I1101 09:20:19.111684  248920 system_pods.go:86] 8 kube-system pods found
	I1101 09:20:19.111729  248920 system_pods.go:89] "coredns-66bc5c9577-z5578" [5bbebf5b-a427-4501-881c-fc445ff4054c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 09:20:19.111742  248920 system_pods.go:89] "etcd-no-preload-397460" [3aa978e5-6af4-4e61-8352-dfa542467d98] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1101 09:20:19.111750  248920 system_pods.go:89] "kindnet-lddf5" [85b09376-b18b-444d-8405-a7045c3732dc] Running
	I1101 09:20:19.111760  248920 system_pods.go:89] "kube-apiserver-no-preload-397460" [d14cbd4e-ca20-4299-b00f-56273156c4c1] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1101 09:20:19.111768  248920 system_pods.go:89] "kube-controller-manager-no-preload-397460" [76ef0890-2a18-4ddd-8196-5a505773f7f0] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1101 09:20:19.111778  248920 system_pods.go:89] "kube-proxy-5kpft" [788827b1-dfc6-4921-a791-13a752d335aa] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1101 09:20:19.111787  248920 system_pods.go:89] "kube-scheduler-no-preload-397460" [e06dfb76-9322-497e-a36a-2320f2103cac] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1101 09:20:19.111796  248920 system_pods.go:89] "storage-provisioner" [8c7273a1-68fa-4783-948d-41e29d4fc406] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1101 09:20:19.111806  248920 system_pods.go:126] duration metric: took 5.767317ms to wait for k8s-apps to be running ...
	I1101 09:20:19.111818  248920 system_svc.go:44] waiting for kubelet service to be running ....
	I1101 09:20:19.111877  248920 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 09:20:19.138114  248920 system_svc.go:56] duration metric: took 26.286261ms WaitForService to wait for kubelet
	I1101 09:20:19.138149  248920 kubeadm.go:587] duration metric: took 3.394229466s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1101 09:20:19.138176  248920 node_conditions.go:102] verifying NodePressure condition ...
	I1101 09:20:19.143378  248920 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1101 09:20:19.143425  248920 node_conditions.go:123] node cpu capacity is 8
	I1101 09:20:19.143445  248920 node_conditions.go:105] duration metric: took 5.262317ms to run NodePressure ...
	I1101 09:20:19.143460  248920 start.go:242] waiting for startup goroutines ...
	I1101 09:20:19.143470  248920 start.go:247] waiting for cluster config update ...
	I1101 09:20:19.143485  248920 start.go:256] writing updated cluster config ...
	I1101 09:20:19.144406  248920 ssh_runner.go:195] Run: rm -f paused
	I1101 09:20:19.152832  248920 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1101 09:20:19.162221  248920 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-z5578" in "kube-system" namespace to be "Ready" or be gone ...
	W1101 09:20:21.167576  248920 pod_ready.go:104] pod "coredns-66bc5c9577-z5578" is not "Ready", error: <nil>
	I1101 09:20:20.203015  216020 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1101 09:20:20.204112  216020 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1101 09:20:20.204184  216020 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1101 09:20:20.204246  216020 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1101 09:20:20.243321  216020 cri.go:89] found id: "f1d1ad774988f50a8cc9b852d65649e1e6961c9ea7280d9c95161580ca21bce2"
	I1101 09:20:20.243347  216020 cri.go:89] found id: ""
	I1101 09:20:20.243356  216020 logs.go:282] 1 containers: [f1d1ad774988f50a8cc9b852d65649e1e6961c9ea7280d9c95161580ca21bce2]
	I1101 09:20:20.243412  216020 ssh_runner.go:195] Run: which crictl
	I1101 09:20:20.248685  216020 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1101 09:20:20.248767  216020 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1101 09:20:20.288247  216020 cri.go:89] found id: ""
	I1101 09:20:20.288278  216020 logs.go:282] 0 containers: []
	W1101 09:20:20.288289  216020 logs.go:284] No container was found matching "etcd"
	I1101 09:20:20.288296  216020 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1101 09:20:20.288349  216020 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1101 09:20:20.326940  216020 cri.go:89] found id: ""
	I1101 09:20:20.326970  216020 logs.go:282] 0 containers: []
	W1101 09:20:20.326983  216020 logs.go:284] No container was found matching "coredns"
	I1101 09:20:20.326991  216020 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1101 09:20:20.327045  216020 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1101 09:20:20.364694  216020 cri.go:89] found id: "7bc51deb4b19b20f402b361941a1b67db11ece9919ebd34259e751ec1f07abb8"
	I1101 09:20:20.364718  216020 cri.go:89] found id: ""
	I1101 09:20:20.364728  216020 logs.go:282] 1 containers: [7bc51deb4b19b20f402b361941a1b67db11ece9919ebd34259e751ec1f07abb8]
	I1101 09:20:20.364785  216020 ssh_runner.go:195] Run: which crictl
	I1101 09:20:20.370035  216020 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1101 09:20:20.370110  216020 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1101 09:20:20.409986  216020 cri.go:89] found id: ""
	I1101 09:20:20.410013  216020 logs.go:282] 0 containers: []
	W1101 09:20:20.410022  216020 logs.go:284] No container was found matching "kube-proxy"
	I1101 09:20:20.410030  216020 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1101 09:20:20.410084  216020 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1101 09:20:20.450994  216020 cri.go:89] found id: "e12b6f88db2ee59fe145026094d5a5de34732fe6f41de9f534b1a2f1a975b677"
	I1101 09:20:20.451019  216020 cri.go:89] found id: ""
	I1101 09:20:20.451029  216020 logs.go:282] 1 containers: [e12b6f88db2ee59fe145026094d5a5de34732fe6f41de9f534b1a2f1a975b677]
	I1101 09:20:20.451084  216020 ssh_runner.go:195] Run: which crictl
	I1101 09:20:20.456705  216020 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1101 09:20:20.456783  216020 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1101 09:20:20.492102  216020 cri.go:89] found id: ""
	I1101 09:20:20.492132  216020 logs.go:282] 0 containers: []
	W1101 09:20:20.492144  216020 logs.go:284] No container was found matching "kindnet"
	I1101 09:20:20.492152  216020 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1101 09:20:20.492216  216020 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1101 09:20:20.528546  216020 cri.go:89] found id: ""
	I1101 09:20:20.528574  216020 logs.go:282] 0 containers: []
	W1101 09:20:20.528585  216020 logs.go:284] No container was found matching "storage-provisioner"
	I1101 09:20:20.528596  216020 logs.go:123] Gathering logs for container status ...
	I1101 09:20:20.528610  216020 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1101 09:20:20.570655  216020 logs.go:123] Gathering logs for kubelet ...
	I1101 09:20:20.570689  216020 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1101 09:20:20.705956  216020 logs.go:123] Gathering logs for dmesg ...
	I1101 09:20:20.706005  216020 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1101 09:20:20.738519  216020 logs.go:123] Gathering logs for describe nodes ...
	I1101 09:20:20.738553  216020 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1101 09:20:20.826748  216020 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1101 09:20:20.826776  216020 logs.go:123] Gathering logs for kube-apiserver [f1d1ad774988f50a8cc9b852d65649e1e6961c9ea7280d9c95161580ca21bce2] ...
	I1101 09:20:20.826792  216020 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f1d1ad774988f50a8cc9b852d65649e1e6961c9ea7280d9c95161580ca21bce2"
	I1101 09:20:20.872792  216020 logs.go:123] Gathering logs for kube-scheduler [7bc51deb4b19b20f402b361941a1b67db11ece9919ebd34259e751ec1f07abb8] ...
	I1101 09:20:20.872846  216020 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7bc51deb4b19b20f402b361941a1b67db11ece9919ebd34259e751ec1f07abb8"
	I1101 09:20:20.937049  216020 logs.go:123] Gathering logs for kube-controller-manager [e12b6f88db2ee59fe145026094d5a5de34732fe6f41de9f534b1a2f1a975b677] ...
	I1101 09:20:20.937084  216020 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e12b6f88db2ee59fe145026094d5a5de34732fe6f41de9f534b1a2f1a975b677"
	I1101 09:20:20.973950  216020 logs.go:123] Gathering logs for CRI-O ...
	I1101 09:20:20.973984  216020 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1101 09:20:23.557937  216020 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1101 09:20:23.558374  216020 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1101 09:20:23.558441  216020 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1101 09:20:23.558581  216020 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1101 09:20:23.596338  216020 cri.go:89] found id: "f1d1ad774988f50a8cc9b852d65649e1e6961c9ea7280d9c95161580ca21bce2"
	I1101 09:20:23.596366  216020 cri.go:89] found id: ""
	I1101 09:20:23.596375  216020 logs.go:282] 1 containers: [f1d1ad774988f50a8cc9b852d65649e1e6961c9ea7280d9c95161580ca21bce2]
	I1101 09:20:23.596435  216020 ssh_runner.go:195] Run: which crictl
	I1101 09:20:23.602285  216020 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1101 09:20:23.602363  216020 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1101 09:20:23.642494  216020 cri.go:89] found id: ""
	I1101 09:20:23.642525  216020 logs.go:282] 0 containers: []
	W1101 09:20:23.642537  216020 logs.go:284] No container was found matching "etcd"
	I1101 09:20:23.642544  216020 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1101 09:20:23.642601  216020 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1101 09:20:23.682422  216020 cri.go:89] found id: ""
	I1101 09:20:23.682454  216020 logs.go:282] 0 containers: []
	W1101 09:20:23.682465  216020 logs.go:284] No container was found matching "coredns"
	I1101 09:20:23.682474  216020 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1101 09:20:23.682530  216020 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1101 09:20:23.724315  216020 cri.go:89] found id: "7bc51deb4b19b20f402b361941a1b67db11ece9919ebd34259e751ec1f07abb8"
	I1101 09:20:23.724344  216020 cri.go:89] found id: ""
	I1101 09:20:23.724354  216020 logs.go:282] 1 containers: [7bc51deb4b19b20f402b361941a1b67db11ece9919ebd34259e751ec1f07abb8]
	I1101 09:20:23.724407  216020 ssh_runner.go:195] Run: which crictl
	I1101 09:20:23.730781  216020 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1101 09:20:23.730944  216020 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1101 09:20:23.769735  216020 cri.go:89] found id: ""
	I1101 09:20:23.769764  216020 logs.go:282] 0 containers: []
	W1101 09:20:23.769775  216020 logs.go:284] No container was found matching "kube-proxy"
	I1101 09:20:23.769783  216020 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1101 09:20:23.769843  216020 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1101 09:20:23.808971  216020 cri.go:89] found id: "df274c74effda434b9fae196ccc17a5c3a3d2b7872fbe77a79ff7e34adc6b3cd"
	I1101 09:20:23.808996  216020 cri.go:89] found id: "e12b6f88db2ee59fe145026094d5a5de34732fe6f41de9f534b1a2f1a975b677"
	I1101 09:20:23.809003  216020 cri.go:89] found id: ""
	I1101 09:20:23.809013  216020 logs.go:282] 2 containers: [df274c74effda434b9fae196ccc17a5c3a3d2b7872fbe77a79ff7e34adc6b3cd e12b6f88db2ee59fe145026094d5a5de34732fe6f41de9f534b1a2f1a975b677]
	I1101 09:20:23.809073  216020 ssh_runner.go:195] Run: which crictl
	I1101 09:20:23.814371  216020 ssh_runner.go:195] Run: which crictl
	I1101 09:20:23.819380  216020 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1101 09:20:23.819458  216020 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1101 09:20:23.855688  216020 cri.go:89] found id: ""
	I1101 09:20:23.855728  216020 logs.go:282] 0 containers: []
	W1101 09:20:23.855735  216020 logs.go:284] No container was found matching "kindnet"
	I1101 09:20:23.855744  216020 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1101 09:20:23.855792  216020 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1101 09:20:23.891186  216020 cri.go:89] found id: ""
	I1101 09:20:23.891220  216020 logs.go:282] 0 containers: []
	W1101 09:20:23.891230  216020 logs.go:284] No container was found matching "storage-provisioner"
	I1101 09:20:23.891248  216020 logs.go:123] Gathering logs for describe nodes ...
	I1101 09:20:23.891262  216020 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1101 09:20:23.968767  216020 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1101 09:20:23.968791  216020 logs.go:123] Gathering logs for kube-apiserver [f1d1ad774988f50a8cc9b852d65649e1e6961c9ea7280d9c95161580ca21bce2] ...
	I1101 09:20:23.968806  216020 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f1d1ad774988f50a8cc9b852d65649e1e6961c9ea7280d9c95161580ca21bce2"
	I1101 09:20:24.007775  216020 logs.go:123] Gathering logs for kube-scheduler [7bc51deb4b19b20f402b361941a1b67db11ece9919ebd34259e751ec1f07abb8] ...
	I1101 09:20:24.007811  216020 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7bc51deb4b19b20f402b361941a1b67db11ece9919ebd34259e751ec1f07abb8"
	I1101 09:20:24.088101  216020 logs.go:123] Gathering logs for kube-controller-manager [e12b6f88db2ee59fe145026094d5a5de34732fe6f41de9f534b1a2f1a975b677] ...
	I1101 09:20:24.088135  216020 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e12b6f88db2ee59fe145026094d5a5de34732fe6f41de9f534b1a2f1a975b677"
	I1101 09:20:24.125424  216020 logs.go:123] Gathering logs for CRI-O ...
	I1101 09:20:24.125460  216020 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1101 09:20:24.204651  216020 logs.go:123] Gathering logs for kubelet ...
	I1101 09:20:24.204694  216020 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1101 09:20:24.340847  216020 logs.go:123] Gathering logs for dmesg ...
	I1101 09:20:24.340892  216020 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1101 09:20:24.364215  216020 logs.go:123] Gathering logs for kube-controller-manager [df274c74effda434b9fae196ccc17a5c3a3d2b7872fbe77a79ff7e34adc6b3cd] ...
	I1101 09:20:24.364258  216020 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 df274c74effda434b9fae196ccc17a5c3a3d2b7872fbe77a79ff7e34adc6b3cd"
	I1101 09:20:24.401857  216020 logs.go:123] Gathering logs for container status ...
	I1101 09:20:24.401944  216020 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1101 09:20:23.954293  243720 node_ready.go:57] node "embed-certs-236314" has "Ready":"False" status (will retry)
	W1101 09:20:26.454308  243720 node_ready.go:57] node "embed-certs-236314" has "Ready":"False" status (will retry)
	W1101 09:20:23.761458  245059 pod_ready.go:104] pod "coredns-5dd5756b68-gcvgr" is not "Ready", error: <nil>
	W1101 09:20:26.258816  245059 pod_ready.go:104] pod "coredns-5dd5756b68-gcvgr" is not "Ready", error: <nil>
	W1101 09:20:23.168014  248920 pod_ready.go:104] pod "coredns-66bc5c9577-z5578" is not "Ready", error: <nil>
	W1101 09:20:25.169277  248920 pod_ready.go:104] pod "coredns-66bc5c9577-z5578" is not "Ready", error: <nil>
	W1101 09:20:27.169610  248920 pod_ready.go:104] pod "coredns-66bc5c9577-z5578" is not "Ready", error: <nil>
	I1101 09:20:26.942965  216020 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1101 09:20:26.943364  216020 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1101 09:20:26.943416  216020 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1101 09:20:26.943479  216020 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1101 09:20:26.981027  216020 cri.go:89] found id: "f1d1ad774988f50a8cc9b852d65649e1e6961c9ea7280d9c95161580ca21bce2"
	I1101 09:20:26.981052  216020 cri.go:89] found id: ""
	I1101 09:20:26.981062  216020 logs.go:282] 1 containers: [f1d1ad774988f50a8cc9b852d65649e1e6961c9ea7280d9c95161580ca21bce2]
	I1101 09:20:26.981118  216020 ssh_runner.go:195] Run: which crictl
	I1101 09:20:26.987191  216020 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1101 09:20:26.987263  216020 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1101 09:20:27.026447  216020 cri.go:89] found id: ""
	I1101 09:20:27.026475  216020 logs.go:282] 0 containers: []
	W1101 09:20:27.026492  216020 logs.go:284] No container was found matching "etcd"
	I1101 09:20:27.026500  216020 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1101 09:20:27.026556  216020 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1101 09:20:27.064272  216020 cri.go:89] found id: ""
	I1101 09:20:27.064302  216020 logs.go:282] 0 containers: []
	W1101 09:20:27.064314  216020 logs.go:284] No container was found matching "coredns"
	I1101 09:20:27.064323  216020 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1101 09:20:27.064378  216020 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1101 09:20:27.104180  216020 cri.go:89] found id: "7bc51deb4b19b20f402b361941a1b67db11ece9919ebd34259e751ec1f07abb8"
	I1101 09:20:27.104205  216020 cri.go:89] found id: ""
	I1101 09:20:27.104214  216020 logs.go:282] 1 containers: [7bc51deb4b19b20f402b361941a1b67db11ece9919ebd34259e751ec1f07abb8]
	I1101 09:20:27.104272  216020 ssh_runner.go:195] Run: which crictl
	I1101 09:20:27.109645  216020 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1101 09:20:27.109722  216020 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1101 09:20:27.146775  216020 cri.go:89] found id: ""
	I1101 09:20:27.146804  216020 logs.go:282] 0 containers: []
	W1101 09:20:27.146814  216020 logs.go:284] No container was found matching "kube-proxy"
	I1101 09:20:27.146823  216020 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1101 09:20:27.146919  216020 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1101 09:20:27.186499  216020 cri.go:89] found id: "df274c74effda434b9fae196ccc17a5c3a3d2b7872fbe77a79ff7e34adc6b3cd"
	I1101 09:20:27.186528  216020 cri.go:89] found id: "e12b6f88db2ee59fe145026094d5a5de34732fe6f41de9f534b1a2f1a975b677"
	I1101 09:20:27.186533  216020 cri.go:89] found id: ""
	I1101 09:20:27.186542  216020 logs.go:282] 2 containers: [df274c74effda434b9fae196ccc17a5c3a3d2b7872fbe77a79ff7e34adc6b3cd e12b6f88db2ee59fe145026094d5a5de34732fe6f41de9f534b1a2f1a975b677]
	I1101 09:20:27.186597  216020 ssh_runner.go:195] Run: which crictl
	I1101 09:20:27.192327  216020 ssh_runner.go:195] Run: which crictl
	I1101 09:20:27.197983  216020 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1101 09:20:27.198056  216020 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1101 09:20:27.235701  216020 cri.go:89] found id: ""
	I1101 09:20:27.235729  216020 logs.go:282] 0 containers: []
	W1101 09:20:27.235739  216020 logs.go:284] No container was found matching "kindnet"
	I1101 09:20:27.235747  216020 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1101 09:20:27.235808  216020 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1101 09:20:27.272762  216020 cri.go:89] found id: ""
	I1101 09:20:27.272795  216020 logs.go:282] 0 containers: []
	W1101 09:20:27.272806  216020 logs.go:284] No container was found matching "storage-provisioner"
	I1101 09:20:27.272822  216020 logs.go:123] Gathering logs for describe nodes ...
	I1101 09:20:27.272836  216020 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1101 09:20:27.357577  216020 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1101 09:20:27.357602  216020 logs.go:123] Gathering logs for kube-apiserver [f1d1ad774988f50a8cc9b852d65649e1e6961c9ea7280d9c95161580ca21bce2] ...
	I1101 09:20:27.357618  216020 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f1d1ad774988f50a8cc9b852d65649e1e6961c9ea7280d9c95161580ca21bce2"
	I1101 09:20:27.404518  216020 logs.go:123] Gathering logs for kube-scheduler [7bc51deb4b19b20f402b361941a1b67db11ece9919ebd34259e751ec1f07abb8] ...
	I1101 09:20:27.404554  216020 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7bc51deb4b19b20f402b361941a1b67db11ece9919ebd34259e751ec1f07abb8"
	I1101 09:20:27.474247  216020 logs.go:123] Gathering logs for kube-controller-manager [df274c74effda434b9fae196ccc17a5c3a3d2b7872fbe77a79ff7e34adc6b3cd] ...
	I1101 09:20:27.474288  216020 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 df274c74effda434b9fae196ccc17a5c3a3d2b7872fbe77a79ff7e34adc6b3cd"
	I1101 09:20:27.509985  216020 logs.go:123] Gathering logs for CRI-O ...
	I1101 09:20:27.510016  216020 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1101 09:20:27.587248  216020 logs.go:123] Gathering logs for kubelet ...
	I1101 09:20:27.587289  216020 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1101 09:20:27.723344  216020 logs.go:123] Gathering logs for dmesg ...
	I1101 09:20:27.723388  216020 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1101 09:20:27.744713  216020 logs.go:123] Gathering logs for kube-controller-manager [e12b6f88db2ee59fe145026094d5a5de34732fe6f41de9f534b1a2f1a975b677] ...
	I1101 09:20:27.744753  216020 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e12b6f88db2ee59fe145026094d5a5de34732fe6f41de9f534b1a2f1a975b677"
	I1101 09:20:27.782162  216020 logs.go:123] Gathering logs for container status ...
	I1101 09:20:27.782189  216020 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1101 09:20:28.953740  243720 node_ready.go:57] node "embed-certs-236314" has "Ready":"False" status (will retry)
	I1101 09:20:29.953441  243720 node_ready.go:49] node "embed-certs-236314" is "Ready"
	I1101 09:20:29.953478  243720 node_ready.go:38] duration metric: took 10.503072965s for node "embed-certs-236314" to be "Ready" ...
	I1101 09:20:29.953496  243720 api_server.go:52] waiting for apiserver process to appear ...
	I1101 09:20:29.953548  243720 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 09:20:29.965815  243720 api_server.go:72] duration metric: took 10.888018784s to wait for apiserver process to appear ...
	I1101 09:20:29.965850  243720 api_server.go:88] waiting for apiserver healthz status ...
	I1101 09:20:29.965903  243720 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1101 09:20:29.971269  243720 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1101 09:20:29.972338  243720 api_server.go:141] control plane version: v1.34.1
	I1101 09:20:29.972362  243720 api_server.go:131] duration metric: took 6.50473ms to wait for apiserver health ...
	I1101 09:20:29.972372  243720 system_pods.go:43] waiting for kube-system pods to appear ...
	I1101 09:20:29.975561  243720 system_pods.go:59] 8 kube-system pods found
	I1101 09:20:29.975592  243720 system_pods.go:61] "coredns-66bc5c9577-wwvth" [f1a303c9-2007-4eb4-a08b-c3ea11570c07] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 09:20:29.975597  243720 system_pods.go:61] "etcd-embed-certs-236314" [2339fae7-be07-4318-b821-1b3a5047f474] Running
	I1101 09:20:29.975604  243720 system_pods.go:61] "kindnet-mf8mj" [b2c42bdc-41df-4851-8d25-6810e5020f41] Running
	I1101 09:20:29.975609  243720 system_pods.go:61] "kube-apiserver-embed-certs-236314" [52806dfc-de35-42a4-ba6a-60ba286ebc38] Running
	I1101 09:20:29.975614  243720 system_pods.go:61] "kube-controller-manager-embed-certs-236314" [785690ed-0fec-4e88-8063-6f52dbebf80c] Running
	I1101 09:20:29.975618  243720 system_pods.go:61] "kube-proxy-55ft8" [57a06788-c25e-43a7-9c69-158766d4b46b] Running
	I1101 09:20:29.975631  243720 system_pods.go:61] "kube-scheduler-embed-certs-236314" [f7e5af2e-cce6-470c-b8a5-9e241bfa9a94] Running
	I1101 09:20:29.975647  243720 system_pods.go:61] "storage-provisioner" [5cdd98f3-13ee-4dff-be42-c5c0686106d4] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1101 09:20:29.975658  243720 system_pods.go:74] duration metric: took 3.279675ms to wait for pod list to return data ...
	I1101 09:20:29.975668  243720 default_sa.go:34] waiting for default service account to be created ...
	I1101 09:20:29.978114  243720 default_sa.go:45] found service account: "default"
	I1101 09:20:29.978134  243720 default_sa.go:55] duration metric: took 2.461072ms for default service account to be created ...
	I1101 09:20:29.978144  243720 system_pods.go:116] waiting for k8s-apps to be running ...
	I1101 09:20:29.981448  243720 system_pods.go:86] 8 kube-system pods found
	I1101 09:20:29.981478  243720 system_pods.go:89] "coredns-66bc5c9577-wwvth" [f1a303c9-2007-4eb4-a08b-c3ea11570c07] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 09:20:29.981484  243720 system_pods.go:89] "etcd-embed-certs-236314" [2339fae7-be07-4318-b821-1b3a5047f474] Running
	I1101 09:20:29.981489  243720 system_pods.go:89] "kindnet-mf8mj" [b2c42bdc-41df-4851-8d25-6810e5020f41] Running
	I1101 09:20:29.981492  243720 system_pods.go:89] "kube-apiserver-embed-certs-236314" [52806dfc-de35-42a4-ba6a-60ba286ebc38] Running
	I1101 09:20:29.981496  243720 system_pods.go:89] "kube-controller-manager-embed-certs-236314" [785690ed-0fec-4e88-8063-6f52dbebf80c] Running
	I1101 09:20:29.981499  243720 system_pods.go:89] "kube-proxy-55ft8" [57a06788-c25e-43a7-9c69-158766d4b46b] Running
	I1101 09:20:29.981509  243720 system_pods.go:89] "kube-scheduler-embed-certs-236314" [f7e5af2e-cce6-470c-b8a5-9e241bfa9a94] Running
	I1101 09:20:29.981527  243720 system_pods.go:89] "storage-provisioner" [5cdd98f3-13ee-4dff-be42-c5c0686106d4] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1101 09:20:29.981569  243720 retry.go:31] will retry after 187.825869ms: missing components: kube-dns
	I1101 09:20:30.175117  243720 system_pods.go:86] 8 kube-system pods found
	I1101 09:20:30.175163  243720 system_pods.go:89] "coredns-66bc5c9577-wwvth" [f1a303c9-2007-4eb4-a08b-c3ea11570c07] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 09:20:30.175179  243720 system_pods.go:89] "etcd-embed-certs-236314" [2339fae7-be07-4318-b821-1b3a5047f474] Running
	I1101 09:20:30.175189  243720 system_pods.go:89] "kindnet-mf8mj" [b2c42bdc-41df-4851-8d25-6810e5020f41] Running
	I1101 09:20:30.175195  243720 system_pods.go:89] "kube-apiserver-embed-certs-236314" [52806dfc-de35-42a4-ba6a-60ba286ebc38] Running
	I1101 09:20:30.175207  243720 system_pods.go:89] "kube-controller-manager-embed-certs-236314" [785690ed-0fec-4e88-8063-6f52dbebf80c] Running
	I1101 09:20:30.175217  243720 system_pods.go:89] "kube-proxy-55ft8" [57a06788-c25e-43a7-9c69-158766d4b46b] Running
	I1101 09:20:30.175226  243720 system_pods.go:89] "kube-scheduler-embed-certs-236314" [f7e5af2e-cce6-470c-b8a5-9e241bfa9a94] Running
	I1101 09:20:30.175234  243720 system_pods.go:89] "storage-provisioner" [5cdd98f3-13ee-4dff-be42-c5c0686106d4] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1101 09:20:30.175259  243720 retry.go:31] will retry after 381.454236ms: missing components: kube-dns
	I1101 09:20:30.566126  243720 system_pods.go:86] 8 kube-system pods found
	I1101 09:20:30.566168  243720 system_pods.go:89] "coredns-66bc5c9577-wwvth" [f1a303c9-2007-4eb4-a08b-c3ea11570c07] Running
	I1101 09:20:30.566177  243720 system_pods.go:89] "etcd-embed-certs-236314" [2339fae7-be07-4318-b821-1b3a5047f474] Running
	I1101 09:20:30.566182  243720 system_pods.go:89] "kindnet-mf8mj" [b2c42bdc-41df-4851-8d25-6810e5020f41] Running
	I1101 09:20:30.566188  243720 system_pods.go:89] "kube-apiserver-embed-certs-236314" [52806dfc-de35-42a4-ba6a-60ba286ebc38] Running
	I1101 09:20:30.566196  243720 system_pods.go:89] "kube-controller-manager-embed-certs-236314" [785690ed-0fec-4e88-8063-6f52dbebf80c] Running
	I1101 09:20:30.566200  243720 system_pods.go:89] "kube-proxy-55ft8" [57a06788-c25e-43a7-9c69-158766d4b46b] Running
	I1101 09:20:30.566213  243720 system_pods.go:89] "kube-scheduler-embed-certs-236314" [f7e5af2e-cce6-470c-b8a5-9e241bfa9a94] Running
	I1101 09:20:30.566218  243720 system_pods.go:89] "storage-provisioner" [5cdd98f3-13ee-4dff-be42-c5c0686106d4] Running
	I1101 09:20:30.566228  243720 system_pods.go:126] duration metric: took 588.078687ms to wait for k8s-apps to be running ...
	I1101 09:20:30.566238  243720 system_svc.go:44] waiting for kubelet service to be running ....
	I1101 09:20:30.566296  243720 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 09:20:30.582659  243720 system_svc.go:56] duration metric: took 16.410097ms WaitForService to wait for kubelet
	I1101 09:20:30.582728  243720 kubeadm.go:587] duration metric: took 11.504904437s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1101 09:20:30.582759  243720 node_conditions.go:102] verifying NodePressure condition ...
	I1101 09:20:30.585958  243720 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1101 09:20:30.586003  243720 node_conditions.go:123] node cpu capacity is 8
	I1101 09:20:30.586024  243720 node_conditions.go:105] duration metric: took 3.259045ms to run NodePressure ...
	I1101 09:20:30.586041  243720 start.go:242] waiting for startup goroutines ...
	I1101 09:20:30.586055  243720 start.go:247] waiting for cluster config update ...
	I1101 09:20:30.586071  243720 start.go:256] writing updated cluster config ...
	I1101 09:20:30.586390  243720 ssh_runner.go:195] Run: rm -f paused
	I1101 09:20:30.590240  243720 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1101 09:20:30.595399  243720 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-wwvth" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:20:30.600623  243720 pod_ready.go:94] pod "coredns-66bc5c9577-wwvth" is "Ready"
	I1101 09:20:30.600650  243720 pod_ready.go:86] duration metric: took 5.2202ms for pod "coredns-66bc5c9577-wwvth" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:20:30.602813  243720 pod_ready.go:83] waiting for pod "etcd-embed-certs-236314" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:20:30.607750  243720 pod_ready.go:94] pod "etcd-embed-certs-236314" is "Ready"
	I1101 09:20:30.607833  243720 pod_ready.go:86] duration metric: took 4.992892ms for pod "etcd-embed-certs-236314" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:20:30.610721  243720 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-236314" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:20:30.615552  243720 pod_ready.go:94] pod "kube-apiserver-embed-certs-236314" is "Ready"
	I1101 09:20:30.615578  243720 pod_ready.go:86] duration metric: took 4.830676ms for pod "kube-apiserver-embed-certs-236314" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:20:30.618437  243720 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-236314" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:20:30.994974  243720 pod_ready.go:94] pod "kube-controller-manager-embed-certs-236314" is "Ready"
	I1101 09:20:30.995000  243720 pod_ready.go:86] duration metric: took 376.535692ms for pod "kube-controller-manager-embed-certs-236314" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:20:31.195164  243720 pod_ready.go:83] waiting for pod "kube-proxy-55ft8" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:20:31.595932  243720 pod_ready.go:94] pod "kube-proxy-55ft8" is "Ready"
	I1101 09:20:31.595968  243720 pod_ready.go:86] duration metric: took 400.777533ms for pod "kube-proxy-55ft8" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:20:31.795539  243720 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-236314" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:20:32.194483  243720 pod_ready.go:94] pod "kube-scheduler-embed-certs-236314" is "Ready"
	I1101 09:20:32.194517  243720 pod_ready.go:86] duration metric: took 398.949801ms for pod "kube-scheduler-embed-certs-236314" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:20:32.194532  243720 pod_ready.go:40] duration metric: took 1.604255892s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1101 09:20:32.240716  243720 start.go:628] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1101 09:20:32.242490  243720 out.go:179] * Done! kubectl is now configured to use "embed-certs-236314" cluster and "default" namespace by default
	W1101 09:20:28.758999  245059 pod_ready.go:104] pod "coredns-5dd5756b68-gcvgr" is not "Ready", error: <nil>
	W1101 09:20:31.258617  245059 pod_ready.go:104] pod "coredns-5dd5756b68-gcvgr" is not "Ready", error: <nil>
	W1101 09:20:29.667792  248920 pod_ready.go:104] pod "coredns-66bc5c9577-z5578" is not "Ready", error: <nil>
	W1101 09:20:31.667889  248920 pod_ready.go:104] pod "coredns-66bc5c9577-z5578" is not "Ready", error: <nil>
	I1101 09:20:30.316079  216020 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1101 09:20:30.316569  216020 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1101 09:20:30.316633  216020 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1101 09:20:30.316702  216020 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1101 09:20:30.350643  216020 cri.go:89] found id: "f1d1ad774988f50a8cc9b852d65649e1e6961c9ea7280d9c95161580ca21bce2"
	I1101 09:20:30.350667  216020 cri.go:89] found id: ""
	I1101 09:20:30.350676  216020 logs.go:282] 1 containers: [f1d1ad774988f50a8cc9b852d65649e1e6961c9ea7280d9c95161580ca21bce2]
	I1101 09:20:30.350736  216020 ssh_runner.go:195] Run: which crictl
	I1101 09:20:30.354985  216020 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1101 09:20:30.355059  216020 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1101 09:20:30.383712  216020 cri.go:89] found id: ""
	I1101 09:20:30.383738  216020 logs.go:282] 0 containers: []
	W1101 09:20:30.383748  216020 logs.go:284] No container was found matching "etcd"
	I1101 09:20:30.383755  216020 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1101 09:20:30.383815  216020 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1101 09:20:30.415596  216020 cri.go:89] found id: ""
	I1101 09:20:30.415639  216020 logs.go:282] 0 containers: []
	W1101 09:20:30.415651  216020 logs.go:284] No container was found matching "coredns"
	I1101 09:20:30.415660  216020 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1101 09:20:30.415720  216020 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1101 09:20:30.444684  216020 cri.go:89] found id: "7bc51deb4b19b20f402b361941a1b67db11ece9919ebd34259e751ec1f07abb8"
	I1101 09:20:30.444704  216020 cri.go:89] found id: ""
	I1101 09:20:30.444711  216020 logs.go:282] 1 containers: [7bc51deb4b19b20f402b361941a1b67db11ece9919ebd34259e751ec1f07abb8]
	I1101 09:20:30.444769  216020 ssh_runner.go:195] Run: which crictl
	I1101 09:20:30.449144  216020 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1101 09:20:30.449220  216020 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1101 09:20:30.477678  216020 cri.go:89] found id: ""
	I1101 09:20:30.477709  216020 logs.go:282] 0 containers: []
	W1101 09:20:30.477721  216020 logs.go:284] No container was found matching "kube-proxy"
	I1101 09:20:30.477728  216020 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1101 09:20:30.477796  216020 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1101 09:20:30.507492  216020 cri.go:89] found id: "df274c74effda434b9fae196ccc17a5c3a3d2b7872fbe77a79ff7e34adc6b3cd"
	I1101 09:20:30.507519  216020 cri.go:89] found id: "e12b6f88db2ee59fe145026094d5a5de34732fe6f41de9f534b1a2f1a975b677"
	I1101 09:20:30.507525  216020 cri.go:89] found id: ""
	I1101 09:20:30.507538  216020 logs.go:282] 2 containers: [df274c74effda434b9fae196ccc17a5c3a3d2b7872fbe77a79ff7e34adc6b3cd e12b6f88db2ee59fe145026094d5a5de34732fe6f41de9f534b1a2f1a975b677]
	I1101 09:20:30.507597  216020 ssh_runner.go:195] Run: which crictl
	I1101 09:20:30.511816  216020 ssh_runner.go:195] Run: which crictl
	I1101 09:20:30.516100  216020 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1101 09:20:30.516165  216020 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1101 09:20:30.546093  216020 cri.go:89] found id: ""
	I1101 09:20:30.546122  216020 logs.go:282] 0 containers: []
	W1101 09:20:30.546130  216020 logs.go:284] No container was found matching "kindnet"
	I1101 09:20:30.546136  216020 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1101 09:20:30.546188  216020 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1101 09:20:30.577237  216020 cri.go:89] found id: ""
	I1101 09:20:30.577263  216020 logs.go:282] 0 containers: []
	W1101 09:20:30.577275  216020 logs.go:284] No container was found matching "storage-provisioner"
	I1101 09:20:30.577290  216020 logs.go:123] Gathering logs for dmesg ...
	I1101 09:20:30.577303  216020 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1101 09:20:30.595185  216020 logs.go:123] Gathering logs for kube-scheduler [7bc51deb4b19b20f402b361941a1b67db11ece9919ebd34259e751ec1f07abb8] ...
	I1101 09:20:30.595229  216020 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7bc51deb4b19b20f402b361941a1b67db11ece9919ebd34259e751ec1f07abb8"
	I1101 09:20:30.648646  216020 logs.go:123] Gathering logs for kube-controller-manager [df274c74effda434b9fae196ccc17a5c3a3d2b7872fbe77a79ff7e34adc6b3cd] ...
	I1101 09:20:30.648686  216020 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 df274c74effda434b9fae196ccc17a5c3a3d2b7872fbe77a79ff7e34adc6b3cd"
	I1101 09:20:30.677569  216020 logs.go:123] Gathering logs for kube-controller-manager [e12b6f88db2ee59fe145026094d5a5de34732fe6f41de9f534b1a2f1a975b677] ...
	I1101 09:20:30.677600  216020 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 e12b6f88db2ee59fe145026094d5a5de34732fe6f41de9f534b1a2f1a975b677"
	I1101 09:20:30.706693  216020 logs.go:123] Gathering logs for CRI-O ...
	I1101 09:20:30.706724  216020 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1101 09:20:30.762581  216020 logs.go:123] Gathering logs for kubelet ...
	I1101 09:20:30.762615  216020 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1101 09:20:30.848404  216020 logs.go:123] Gathering logs for describe nodes ...
	I1101 09:20:30.848438  216020 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1101 09:20:30.908344  216020 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1101 09:20:30.908368  216020 logs.go:123] Gathering logs for kube-apiserver [f1d1ad774988f50a8cc9b852d65649e1e6961c9ea7280d9c95161580ca21bce2] ...
	I1101 09:20:30.908383  216020 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f1d1ad774988f50a8cc9b852d65649e1e6961c9ea7280d9c95161580ca21bce2"
	I1101 09:20:30.941078  216020 logs.go:123] Gathering logs for container status ...
	I1101 09:20:30.941109  216020 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1101 09:20:33.479254  216020 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1101 09:20:33.479692  216020 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1101 09:20:33.479737  216020 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1101 09:20:33.479790  216020 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1101 09:20:33.513657  216020 cri.go:89] found id: "f1d1ad774988f50a8cc9b852d65649e1e6961c9ea7280d9c95161580ca21bce2"
	I1101 09:20:33.513696  216020 cri.go:89] found id: ""
	I1101 09:20:33.513707  216020 logs.go:282] 1 containers: [f1d1ad774988f50a8cc9b852d65649e1e6961c9ea7280d9c95161580ca21bce2]
	I1101 09:20:33.513777  216020 ssh_runner.go:195] Run: which crictl
	I1101 09:20:33.519031  216020 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1101 09:20:33.519109  216020 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1101 09:20:33.550606  216020 cri.go:89] found id: ""
	I1101 09:20:33.550636  216020 logs.go:282] 0 containers: []
	W1101 09:20:33.550649  216020 logs.go:284] No container was found matching "etcd"
	I1101 09:20:33.550656  216020 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1101 09:20:33.550715  216020 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1101 09:20:33.582639  216020 cri.go:89] found id: ""
	I1101 09:20:33.582671  216020 logs.go:282] 0 containers: []
	W1101 09:20:33.582682  216020 logs.go:284] No container was found matching "coredns"
	I1101 09:20:33.582689  216020 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1101 09:20:33.582747  216020 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1101 09:20:33.613948  216020 cri.go:89] found id: "7bc51deb4b19b20f402b361941a1b67db11ece9919ebd34259e751ec1f07abb8"
	I1101 09:20:33.613970  216020 cri.go:89] found id: ""
	I1101 09:20:33.613977  216020 logs.go:282] 1 containers: [7bc51deb4b19b20f402b361941a1b67db11ece9919ebd34259e751ec1f07abb8]
	I1101 09:20:33.614025  216020 ssh_runner.go:195] Run: which crictl
	I1101 09:20:33.618448  216020 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1101 09:20:33.618552  216020 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1101 09:20:33.646810  216020 cri.go:89] found id: ""
	I1101 09:20:33.646835  216020 logs.go:282] 0 containers: []
	W1101 09:20:33.646843  216020 logs.go:284] No container was found matching "kube-proxy"
	I1101 09:20:33.646854  216020 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1101 09:20:33.646927  216020 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1101 09:20:33.675852  216020 cri.go:89] found id: "df274c74effda434b9fae196ccc17a5c3a3d2b7872fbe77a79ff7e34adc6b3cd"
	I1101 09:20:33.675910  216020 cri.go:89] found id: ""
	I1101 09:20:33.675921  216020 logs.go:282] 1 containers: [df274c74effda434b9fae196ccc17a5c3a3d2b7872fbe77a79ff7e34adc6b3cd]
	I1101 09:20:33.675976  216020 ssh_runner.go:195] Run: which crictl
	I1101 09:20:33.679944  216020 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1101 09:20:33.680013  216020 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1101 09:20:33.707976  216020 cri.go:89] found id: ""
	I1101 09:20:33.708002  216020 logs.go:282] 0 containers: []
	W1101 09:20:33.708010  216020 logs.go:284] No container was found matching "kindnet"
	I1101 09:20:33.708016  216020 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1101 09:20:33.708072  216020 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1101 09:20:33.735533  216020 cri.go:89] found id: ""
	I1101 09:20:33.735556  216020 logs.go:282] 0 containers: []
	W1101 09:20:33.735563  216020 logs.go:284] No container was found matching "storage-provisioner"
	I1101 09:20:33.735571  216020 logs.go:123] Gathering logs for kube-controller-manager [df274c74effda434b9fae196ccc17a5c3a3d2b7872fbe77a79ff7e34adc6b3cd] ...
	I1101 09:20:33.735586  216020 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 df274c74effda434b9fae196ccc17a5c3a3d2b7872fbe77a79ff7e34adc6b3cd"
	I1101 09:20:33.763182  216020 logs.go:123] Gathering logs for CRI-O ...
	I1101 09:20:33.763207  216020 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1101 09:20:33.816929  216020 logs.go:123] Gathering logs for container status ...
	I1101 09:20:33.816976  216020 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1101 09:20:33.849233  216020 logs.go:123] Gathering logs for kubelet ...
	I1101 09:20:33.849260  216020 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1101 09:20:33.936422  216020 logs.go:123] Gathering logs for dmesg ...
	I1101 09:20:33.936456  216020 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1101 09:20:33.952895  216020 logs.go:123] Gathering logs for describe nodes ...
	I1101 09:20:33.952924  216020 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1101 09:20:34.012203  216020 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1101 09:20:34.012225  216020 logs.go:123] Gathering logs for kube-apiserver [f1d1ad774988f50a8cc9b852d65649e1e6961c9ea7280d9c95161580ca21bce2] ...
	I1101 09:20:34.012238  216020 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f1d1ad774988f50a8cc9b852d65649e1e6961c9ea7280d9c95161580ca21bce2"
	I1101 09:20:34.048790  216020 logs.go:123] Gathering logs for kube-scheduler [7bc51deb4b19b20f402b361941a1b67db11ece9919ebd34259e751ec1f07abb8] ...
	I1101 09:20:34.048826  216020 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7bc51deb4b19b20f402b361941a1b67db11ece9919ebd34259e751ec1f07abb8"
	W1101 09:20:33.758452  245059 pod_ready.go:104] pod "coredns-5dd5756b68-gcvgr" is not "Ready", error: <nil>
	W1101 09:20:36.257461  245059 pod_ready.go:104] pod "coredns-5dd5756b68-gcvgr" is not "Ready", error: <nil>
	W1101 09:20:33.668613  248920 pod_ready.go:104] pod "coredns-66bc5c9577-z5578" is not "Ready", error: <nil>
	W1101 09:20:36.168455  248920 pod_ready.go:104] pod "coredns-66bc5c9577-z5578" is not "Ready", error: <nil>
	I1101 09:20:36.598932  216020 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1101 09:20:36.599339  216020 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1101 09:20:36.599391  216020 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1101 09:20:36.599441  216020 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1101 09:20:36.628328  216020 cri.go:89] found id: "f1d1ad774988f50a8cc9b852d65649e1e6961c9ea7280d9c95161580ca21bce2"
	I1101 09:20:36.628351  216020 cri.go:89] found id: ""
	I1101 09:20:36.628359  216020 logs.go:282] 1 containers: [f1d1ad774988f50a8cc9b852d65649e1e6961c9ea7280d9c95161580ca21bce2]
	I1101 09:20:36.628405  216020 ssh_runner.go:195] Run: which crictl
	I1101 09:20:36.632475  216020 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1101 09:20:36.632552  216020 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1101 09:20:36.660472  216020 cri.go:89] found id: ""
	I1101 09:20:36.660496  216020 logs.go:282] 0 containers: []
	W1101 09:20:36.660506  216020 logs.go:284] No container was found matching "etcd"
	I1101 09:20:36.660513  216020 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1101 09:20:36.660572  216020 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1101 09:20:36.689813  216020 cri.go:89] found id: ""
	I1101 09:20:36.689836  216020 logs.go:282] 0 containers: []
	W1101 09:20:36.689844  216020 logs.go:284] No container was found matching "coredns"
	I1101 09:20:36.689849  216020 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1101 09:20:36.689920  216020 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1101 09:20:36.719146  216020 cri.go:89] found id: "7bc51deb4b19b20f402b361941a1b67db11ece9919ebd34259e751ec1f07abb8"
	I1101 09:20:36.719170  216020 cri.go:89] found id: ""
	I1101 09:20:36.719179  216020 logs.go:282] 1 containers: [7bc51deb4b19b20f402b361941a1b67db11ece9919ebd34259e751ec1f07abb8]
	I1101 09:20:36.719252  216020 ssh_runner.go:195] Run: which crictl
	I1101 09:20:36.723428  216020 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1101 09:20:36.723493  216020 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1101 09:20:36.752700  216020 cri.go:89] found id: ""
	I1101 09:20:36.752730  216020 logs.go:282] 0 containers: []
	W1101 09:20:36.752753  216020 logs.go:284] No container was found matching "kube-proxy"
	I1101 09:20:36.752764  216020 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1101 09:20:36.752819  216020 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1101 09:20:36.782237  216020 cri.go:89] found id: "df274c74effda434b9fae196ccc17a5c3a3d2b7872fbe77a79ff7e34adc6b3cd"
	I1101 09:20:36.782256  216020 cri.go:89] found id: ""
	I1101 09:20:36.782264  216020 logs.go:282] 1 containers: [df274c74effda434b9fae196ccc17a5c3a3d2b7872fbe77a79ff7e34adc6b3cd]
	I1101 09:20:36.782318  216020 ssh_runner.go:195] Run: which crictl
	I1101 09:20:36.786664  216020 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1101 09:20:36.786734  216020 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1101 09:20:36.814671  216020 cri.go:89] found id: ""
	I1101 09:20:36.814695  216020 logs.go:282] 0 containers: []
	W1101 09:20:36.814703  216020 logs.go:284] No container was found matching "kindnet"
	I1101 09:20:36.814708  216020 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1101 09:20:36.814761  216020 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1101 09:20:36.843368  216020 cri.go:89] found id: ""
	I1101 09:20:36.843394  216020 logs.go:282] 0 containers: []
	W1101 09:20:36.843401  216020 logs.go:284] No container was found matching "storage-provisioner"
	I1101 09:20:36.843414  216020 logs.go:123] Gathering logs for kube-apiserver [f1d1ad774988f50a8cc9b852d65649e1e6961c9ea7280d9c95161580ca21bce2] ...
	I1101 09:20:36.843425  216020 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f1d1ad774988f50a8cc9b852d65649e1e6961c9ea7280d9c95161580ca21bce2"
	I1101 09:20:36.876378  216020 logs.go:123] Gathering logs for kube-scheduler [7bc51deb4b19b20f402b361941a1b67db11ece9919ebd34259e751ec1f07abb8] ...
	I1101 09:20:36.876411  216020 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7bc51deb4b19b20f402b361941a1b67db11ece9919ebd34259e751ec1f07abb8"
	I1101 09:20:36.926217  216020 logs.go:123] Gathering logs for kube-controller-manager [df274c74effda434b9fae196ccc17a5c3a3d2b7872fbe77a79ff7e34adc6b3cd] ...
	I1101 09:20:36.926264  216020 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 df274c74effda434b9fae196ccc17a5c3a3d2b7872fbe77a79ff7e34adc6b3cd"
	I1101 09:20:36.954440  216020 logs.go:123] Gathering logs for CRI-O ...
	I1101 09:20:36.954465  216020 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1101 09:20:37.014327  216020 logs.go:123] Gathering logs for container status ...
	I1101 09:20:37.014368  216020 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1101 09:20:37.049924  216020 logs.go:123] Gathering logs for kubelet ...
	I1101 09:20:37.049978  216020 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1101 09:20:37.136518  216020 logs.go:123] Gathering logs for dmesg ...
	I1101 09:20:37.136556  216020 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1101 09:20:37.153310  216020 logs.go:123] Gathering logs for describe nodes ...
	I1101 09:20:37.153354  216020 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1101 09:20:37.213128  216020 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1101 09:20:39.714753  216020 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1101 09:20:39.715242  216020 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1101 09:20:39.715295  216020 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1101 09:20:39.715361  216020 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1101 09:20:39.744822  216020 cri.go:89] found id: "f1d1ad774988f50a8cc9b852d65649e1e6961c9ea7280d9c95161580ca21bce2"
	I1101 09:20:39.744845  216020 cri.go:89] found id: ""
	I1101 09:20:39.744858  216020 logs.go:282] 1 containers: [f1d1ad774988f50a8cc9b852d65649e1e6961c9ea7280d9c95161580ca21bce2]
	I1101 09:20:39.744929  216020 ssh_runner.go:195] Run: which crictl
	I1101 09:20:39.749218  216020 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1101 09:20:39.749295  216020 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1101 09:20:39.777970  216020 cri.go:89] found id: ""
	I1101 09:20:39.778001  216020 logs.go:282] 0 containers: []
	W1101 09:20:39.778013  216020 logs.go:284] No container was found matching "etcd"
	I1101 09:20:39.778020  216020 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1101 09:20:39.778080  216020 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1101 09:20:39.807189  216020 cri.go:89] found id: ""
	I1101 09:20:39.807222  216020 logs.go:282] 0 containers: []
	W1101 09:20:39.807234  216020 logs.go:284] No container was found matching "coredns"
	I1101 09:20:39.807243  216020 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1101 09:20:39.807308  216020 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	
	
	==> CRI-O <==
	Nov 01 09:20:30 embed-certs-236314 crio[773]: time="2025-11-01T09:20:30.164118295Z" level=info msg="Starting container: 013df4b6bc5faddc21afc99d9ef41c7a428dca46daa741738609482347119cd6" id=0d76f568-c499-4cf1-9e52-b8b7d8ce6bbc name=/runtime.v1.RuntimeService/StartContainer
	Nov 01 09:20:30 embed-certs-236314 crio[773]: time="2025-11-01T09:20:30.166524482Z" level=info msg="Started container" PID=1871 containerID=013df4b6bc5faddc21afc99d9ef41c7a428dca46daa741738609482347119cd6 description=kube-system/coredns-66bc5c9577-wwvth/coredns id=0d76f568-c499-4cf1-9e52-b8b7d8ce6bbc name=/runtime.v1.RuntimeService/StartContainer sandboxID=0757a3b767c212553ba65fc563e746709cfbc5140ce945e9f594248b8ca369e2
	Nov 01 09:20:32 embed-certs-236314 crio[773]: time="2025-11-01T09:20:32.718135091Z" level=info msg="Running pod sandbox: default/busybox/POD" id=e12dfd2f-bcc2-4d28-80fa-96b7d1342118 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 01 09:20:32 embed-certs-236314 crio[773]: time="2025-11-01T09:20:32.71825187Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 09:20:32 embed-certs-236314 crio[773]: time="2025-11-01T09:20:32.724472726Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:84cb60cad0b12655863d063b08343d0238dfd94f6b914d08fe061e09d53c4f40 UID:6e751a41-58d1-4511-8037-a88d0dc71611 NetNS:/var/run/netns/b288581b-f09c-4b1d-b154-f101fbd09cf9 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc00027e548}] Aliases:map[]}"
	Nov 01 09:20:32 embed-certs-236314 crio[773]: time="2025-11-01T09:20:32.724507057Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Nov 01 09:20:32 embed-certs-236314 crio[773]: time="2025-11-01T09:20:32.735010397Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:84cb60cad0b12655863d063b08343d0238dfd94f6b914d08fe061e09d53c4f40 UID:6e751a41-58d1-4511-8037-a88d0dc71611 NetNS:/var/run/netns/b288581b-f09c-4b1d-b154-f101fbd09cf9 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc00027e548}] Aliases:map[]}"
	Nov 01 09:20:32 embed-certs-236314 crio[773]: time="2025-11-01T09:20:32.735165703Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Nov 01 09:20:32 embed-certs-236314 crio[773]: time="2025-11-01T09:20:32.736069174Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Nov 01 09:20:32 embed-certs-236314 crio[773]: time="2025-11-01T09:20:32.736896139Z" level=info msg="Ran pod sandbox 84cb60cad0b12655863d063b08343d0238dfd94f6b914d08fe061e09d53c4f40 with infra container: default/busybox/POD" id=e12dfd2f-bcc2-4d28-80fa-96b7d1342118 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 01 09:20:32 embed-certs-236314 crio[773]: time="2025-11-01T09:20:32.738350219Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=369f3498-6063-4f60-87f0-1121a076a118 name=/runtime.v1.ImageService/ImageStatus
	Nov 01 09:20:32 embed-certs-236314 crio[773]: time="2025-11-01T09:20:32.738495266Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=369f3498-6063-4f60-87f0-1121a076a118 name=/runtime.v1.ImageService/ImageStatus
	Nov 01 09:20:32 embed-certs-236314 crio[773]: time="2025-11-01T09:20:32.738527216Z" level=info msg="Neither image nor artifact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=369f3498-6063-4f60-87f0-1121a076a118 name=/runtime.v1.ImageService/ImageStatus
	Nov 01 09:20:32 embed-certs-236314 crio[773]: time="2025-11-01T09:20:32.739327722Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=34313695-deca-4104-8ab7-5b83a2005590 name=/runtime.v1.ImageService/PullImage
	Nov 01 09:20:32 embed-certs-236314 crio[773]: time="2025-11-01T09:20:32.741181731Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Nov 01 09:20:33 embed-certs-236314 crio[773]: time="2025-11-01T09:20:33.548772923Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998" id=34313695-deca-4104-8ab7-5b83a2005590 name=/runtime.v1.ImageService/PullImage
	Nov 01 09:20:33 embed-certs-236314 crio[773]: time="2025-11-01T09:20:33.549604428Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=a0926088-8ae9-4798-a854-219255cdab94 name=/runtime.v1.ImageService/ImageStatus
	Nov 01 09:20:33 embed-certs-236314 crio[773]: time="2025-11-01T09:20:33.551286419Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=8101ae99-ca62-49ca-a2d3-fb4d3e37940e name=/runtime.v1.ImageService/ImageStatus
	Nov 01 09:20:33 embed-certs-236314 crio[773]: time="2025-11-01T09:20:33.555200053Z" level=info msg="Creating container: default/busybox/busybox" id=eabca65e-231d-4e5e-840d-04889a4543ae name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 09:20:33 embed-certs-236314 crio[773]: time="2025-11-01T09:20:33.555316239Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 09:20:33 embed-certs-236314 crio[773]: time="2025-11-01T09:20:33.559253769Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 09:20:33 embed-certs-236314 crio[773]: time="2025-11-01T09:20:33.55977725Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 09:20:33 embed-certs-236314 crio[773]: time="2025-11-01T09:20:33.588844961Z" level=info msg="Created container f05dc070345578457936edb259c7c710fb13e2a0a5f4e52e71608449e503dac2: default/busybox/busybox" id=eabca65e-231d-4e5e-840d-04889a4543ae name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 09:20:33 embed-certs-236314 crio[773]: time="2025-11-01T09:20:33.589581152Z" level=info msg="Starting container: f05dc070345578457936edb259c7c710fb13e2a0a5f4e52e71608449e503dac2" id=9684ed31-72f8-4432-9943-719ebe966edc name=/runtime.v1.RuntimeService/StartContainer
	Nov 01 09:20:33 embed-certs-236314 crio[773]: time="2025-11-01T09:20:33.591430848Z" level=info msg="Started container" PID=1943 containerID=f05dc070345578457936edb259c7c710fb13e2a0a5f4e52e71608449e503dac2 description=default/busybox/busybox id=9684ed31-72f8-4432-9943-719ebe966edc name=/runtime.v1.RuntimeService/StartContainer sandboxID=84cb60cad0b12655863d063b08343d0238dfd94f6b914d08fe061e09d53c4f40
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                          NAMESPACE
	f05dc07034557       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998   8 seconds ago       Running             busybox                   0                   84cb60cad0b12       busybox                                      default
	013df4b6bc5fa       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                      11 seconds ago      Running             coredns                   0                   0757a3b767c21       coredns-66bc5c9577-wwvth                     kube-system
	f42f7d7f74c51       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      11 seconds ago      Running             storage-provisioner       0                   14711c24c6373       storage-provisioner                          kube-system
	f6e2198408462       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                      22 seconds ago      Running             kube-proxy                0                   b19b09988a83d       kube-proxy-55ft8                             kube-system
	624bcf5d92711       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                      22 seconds ago      Running             kindnet-cni               0                   2de3ddb00ee2c       kindnet-mf8mj                                kube-system
	3a7cd0f981f8e       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                      32 seconds ago      Running             kube-apiserver            0                   8fed631fc8cfd       kube-apiserver-embed-certs-236314            kube-system
	1984cd50ecf81       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                      32 seconds ago      Running             etcd                      0                   e49f4365e64d0       etcd-embed-certs-236314                      kube-system
	a5af9c90b5485       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                      32 seconds ago      Running             kube-controller-manager   0                   1fff251bd11db       kube-controller-manager-embed-certs-236314   kube-system
	13f9e761da2a3       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                      32 seconds ago      Running             kube-scheduler            0                   07693d8a65728       kube-scheduler-embed-certs-236314            kube-system
	
	
	==> coredns [013df4b6bc5faddc21afc99d9ef41c7a428dca46daa741738609482347119cd6] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:58696 - 55371 "HINFO IN 7496794946614049204.3058877408871238336. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.131613829s
	
	
	==> describe nodes <==
	Name:               embed-certs-236314
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-236314
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=21e20c7776311c6e29254646bf2620ea610dd192
	                    minikube.k8s.io/name=embed-certs-236314
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_01T09_20_14_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 01 Nov 2025 09:20:10 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-236314
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 01 Nov 2025 09:20:33 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 01 Nov 2025 09:20:29 +0000   Sat, 01 Nov 2025 09:20:09 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 01 Nov 2025 09:20:29 +0000   Sat, 01 Nov 2025 09:20:09 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 01 Nov 2025 09:20:29 +0000   Sat, 01 Nov 2025 09:20:09 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 01 Nov 2025 09:20:29 +0000   Sat, 01 Nov 2025 09:20:29 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    embed-certs-236314
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863348Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863348Ki
	  pods:               110
	System Info:
	  Machine ID:                 98aac72b9abe9f06f1b9b38568f5cc96
	  System UUID:                dee9e247-3614-413a-be45-584e8f9ead09
	  Boot ID:                    87f40279-f2b1-40f3-beea-00aea5942dd1
	  Kernel Version:             6.8.0-1043-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         9s
	  kube-system                 coredns-66bc5c9577-wwvth                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     23s
	  kube-system                 etcd-embed-certs-236314                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         30s
	  kube-system                 kindnet-mf8mj                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      23s
	  kube-system                 kube-apiserver-embed-certs-236314             250m (3%)     0 (0%)      0 (0%)           0 (0%)         28s
	  kube-system                 kube-controller-manager-embed-certs-236314    200m (2%)     0 (0%)      0 (0%)           0 (0%)         28s
	  kube-system                 kube-proxy-55ft8                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         23s
	  kube-system                 kube-scheduler-embed-certs-236314             100m (1%)     0 (0%)      0 (0%)           0 (0%)         28s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         22s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 22s                kube-proxy       
	  Normal  Starting                 33s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  33s (x8 over 33s)  kubelet          Node embed-certs-236314 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    33s (x8 over 33s)  kubelet          Node embed-certs-236314 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     33s (x8 over 33s)  kubelet          Node embed-certs-236314 status is now: NodeHasSufficientPID
	  Normal  Starting                 28s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  28s                kubelet          Node embed-certs-236314 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    28s                kubelet          Node embed-certs-236314 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     28s                kubelet          Node embed-certs-236314 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           24s                node-controller  Node embed-certs-236314 event: Registered Node embed-certs-236314 in Controller
	  Normal  NodeReady                12s                kubelet          Node embed-certs-236314 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.000025] ll header: 00000000: 0a 94 8e 4a 1a 1e 72 e7 eb 50 9b c7 08 00
	[  +2.047726] IPv4: martian source 10.105.176.244 from 192.168.49.2, on dev eth0
	[  +0.000008] ll header: 00000000: 0a 94 8e 4a 1a 1e 72 e7 eb 50 9b c7 08 00
	[Nov 1 08:38] IPv4: martian source 10.105.176.244 from 192.168.49.2, on dev eth0
	[  +0.000025] ll header: 00000000: 0a 94 8e 4a 1a 1e 72 e7 eb 50 9b c7 08 00
	[  +2.215674] IPv4: martian source 10.105.176.244 from 192.168.49.2, on dev eth0
	[  +0.000007] ll header: 00000000: 0a 94 8e 4a 1a 1e 72 e7 eb 50 9b c7 08 00
	[  +1.047939] IPv4: martian source 10.244.0.3 from 192.168.49.2, on dev eth0
	[  +0.000023] ll header: 00000000: 0a 94 8e 4a 1a 1e 72 e7 eb 50 9b c7 08 00
	[  +1.023915] IPv4: martian source 10.244.0.3 from 192.168.49.2, on dev eth0
	[  +0.000007] ll header: 00000000: 0a 94 8e 4a 1a 1e 72 e7 eb 50 9b c7 08 00
	[  +1.023851] IPv4: martian source 10.244.0.3 from 192.168.49.2, on dev eth0
	[  +0.000021] ll header: 00000000: 0a 94 8e 4a 1a 1e 72 e7 eb 50 9b c7 08 00
	[  +1.023867] IPv4: martian source 10.244.0.3 from 192.168.49.2, on dev eth0
	[  +0.000026] ll header: 00000000: 0a 94 8e 4a 1a 1e 72 e7 eb 50 9b c7 08 00
	[  +1.023880] IPv4: martian source 10.244.0.3 from 192.168.49.2, on dev eth0
	[  +0.000033] ll header: 00000000: 0a 94 8e 4a 1a 1e 72 e7 eb 50 9b c7 08 00
	[  +1.023884] IPv4: martian source 10.244.0.3 from 192.168.49.2, on dev eth0
	[  +0.000029] ll header: 00000000: 0a 94 8e 4a 1a 1e 72 e7 eb 50 9b c7 08 00
	[  +1.023851] IPv4: martian source 10.244.0.3 from 192.168.49.2, on dev eth0
	[  +0.000028] ll header: 00000000: 0a 94 8e 4a 1a 1e 72 e7 eb 50 9b c7 08 00
	[  +4.031524] IPv4: martian source 10.244.0.3 from 192.168.49.2, on dev eth0
	[  +0.000026] ll header: 00000000: 0a 94 8e 4a 1a 1e 72 e7 eb 50 9b c7 08 00
	[  +8.255123] IPv4: martian source 10.244.0.3 from 192.168.49.2, on dev eth0
	[  +0.000022] ll header: 00000000: 0a 94 8e 4a 1a 1e 72 e7 eb 50 9b c7 08 00
	
	
	==> etcd [1984cd50ecf816a62a8d7a9154484a68decf54a4727ab9d5149012566b3081c5] <==
	{"level":"warn","ts":"2025-11-01T09:20:10.079077Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41370","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:20:10.086012Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41380","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:20:10.094676Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41394","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:20:10.101414Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41414","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:20:10.108236Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41440","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:20:10.116322Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41460","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:20:10.123089Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41486","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:20:10.129858Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41514","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:20:10.136741Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41536","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:20:10.143969Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41560","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:20:10.151994Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41578","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:20:10.158382Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41598","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:20:10.164807Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41624","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:20:10.172305Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41634","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:20:10.179763Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41642","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:20:10.187993Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41658","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:20:10.197822Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41682","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:20:10.205238Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41686","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:20:10.211784Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41700","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:20:10.218697Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41712","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:20:10.225067Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41726","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:20:10.231672Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41746","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:20:10.257517Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41766","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:20:10.264792Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41796","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:20:10.272558Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41812","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 09:20:42 up  1:03,  0 user,  load average: 3.52, 2.59, 1.54
	Linux embed-certs-236314 6.8.0-1043-gcp #46~22.04.1-Ubuntu SMP Wed Oct 22 19:00:03 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [624bcf5d927115c7533b7e23e901ee93c39cfdf4ed487a1fe776285f80e5f9dc] <==
	I1101 09:20:19.056830       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1101 09:20:19.150041       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1101 09:20:19.150245       1 main.go:148] setting mtu 1500 for CNI 
	I1101 09:20:19.150271       1 main.go:178] kindnetd IP family: "ipv4"
	I1101 09:20:19.150288       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-01T09:20:19Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1101 09:20:19.451949       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1101 09:20:19.451989       1 controller.go:381] "Waiting for informer caches to sync"
	I1101 09:20:19.453209       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1101 09:20:19.452011       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1101 09:20:19.652270       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1101 09:20:19.652301       1 metrics.go:72] Registering metrics
	I1101 09:20:19.652356       1 controller.go:711] "Syncing nftables rules"
	I1101 09:20:29.451382       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1101 09:20:29.451451       1 main.go:301] handling current node
	I1101 09:20:39.455026       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1101 09:20:39.455073       1 main.go:301] handling current node
	
	
	==> kube-apiserver [3a7cd0f981f8edc4286a6babe1789d1eeed6e4f7da9ca6931f8af4c7c5326ed5] <==
	I1101 09:20:10.871178       1 default_servicecidr_controller.go:166] Creating default ServiceCIDR with CIDRs: [10.96.0.0/12]
	I1101 09:20:10.873636       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1101 09:20:10.873661       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1101 09:20:10.876911       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1101 09:20:10.877156       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1101 09:20:10.882096       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1101 09:20:11.045679       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1101 09:20:11.751259       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1101 09:20:11.755412       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1101 09:20:11.755444       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1101 09:20:12.259720       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1101 09:20:12.297853       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1101 09:20:12.355539       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1101 09:20:12.362730       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I1101 09:20:12.363857       1 controller.go:667] quota admission added evaluator for: endpoints
	I1101 09:20:12.368291       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1101 09:20:12.781625       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1101 09:20:13.487945       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1101 09:20:13.501179       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1101 09:20:13.509308       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1101 09:20:18.534029       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I1101 09:20:18.586547       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1101 09:20:18.591207       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1101 09:20:18.885168       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	E1101 09:20:40.505516       1 conn.go:339] Error on socket receive: read tcp 192.168.76.2:8443->192.168.76.1:38426: use of closed network connection
	
	
	==> kube-controller-manager [a5af9c90b5485a60daf47c88f7c9164faa295be1a03840cf48eca30831a7727c] <==
	I1101 09:20:17.781390       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1101 09:20:17.781493       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1101 09:20:17.781524       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1101 09:20:17.781785       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1101 09:20:17.782100       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1101 09:20:17.782199       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1101 09:20:17.782327       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1101 09:20:17.782341       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1101 09:20:17.782357       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1101 09:20:17.782606       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1101 09:20:17.782631       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1101 09:20:17.784823       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1101 09:20:17.787153       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1101 09:20:17.787212       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1101 09:20:17.787279       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1101 09:20:17.787287       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1101 09:20:17.787294       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1101 09:20:17.792489       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1101 09:20:17.792566       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1101 09:20:17.796096       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="embed-certs-236314" podCIDRs=["10.244.0.0/24"]
	I1101 09:20:17.799300       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1101 09:20:17.806526       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1101 09:20:17.811856       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1101 09:20:17.821104       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1101 09:20:32.737059       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [f6e2198408462e5ec76396f8dce4bc4b99965294e888ec17118c2eeb26fd81a7] <==
	I1101 09:20:18.970601       1 server_linux.go:53] "Using iptables proxy"
	I1101 09:20:19.043323       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1101 09:20:19.144140       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1101 09:20:19.144321       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1101 09:20:19.145155       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1101 09:20:19.208745       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1101 09:20:19.209366       1 server_linux.go:132] "Using iptables Proxier"
	I1101 09:20:19.228465       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1101 09:20:19.229354       1 server.go:527] "Version info" version="v1.34.1"
	I1101 09:20:19.229529       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1101 09:20:19.235602       1 config.go:309] "Starting node config controller"
	I1101 09:20:19.235715       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1101 09:20:19.235763       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1101 09:20:19.235785       1 config.go:403] "Starting serviceCIDR config controller"
	I1101 09:20:19.235822       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1101 09:20:19.236161       1 config.go:200] "Starting service config controller"
	I1101 09:20:19.236182       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1101 09:20:19.236202       1 config.go:106] "Starting endpoint slice config controller"
	I1101 09:20:19.236208       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1101 09:20:19.336318       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1101 09:20:19.336378       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1101 09:20:19.336396       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [13f9e761da2a30c15bab7b9db8296b1e3267cbbed98aca7c6b68fd2f2be0118e] <==
	E1101 09:20:10.807421       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1101 09:20:10.807541       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1101 09:20:10.807879       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1101 09:20:10.807932       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1101 09:20:10.807975       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1101 09:20:10.808071       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1101 09:20:10.808144       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1101 09:20:10.808324       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1101 09:20:10.808675       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1101 09:20:10.808675       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1101 09:20:10.808746       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1101 09:20:10.808969       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1101 09:20:10.809152       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1101 09:20:11.650793       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1101 09:20:11.679975       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1101 09:20:11.695572       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1101 09:20:11.738205       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1101 09:20:11.748573       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1101 09:20:11.757806       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1101 09:20:11.896076       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1101 09:20:11.900134       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1101 09:20:11.951711       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1101 09:20:12.059254       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1101 09:20:12.093647       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	I1101 09:20:14.201338       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 01 09:20:14 embed-certs-236314 kubelet[1338]: E1101 09:20:14.367187    1338 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-embed-certs-236314\" already exists" pod="kube-system/kube-scheduler-embed-certs-236314"
	Nov 01 09:20:14 embed-certs-236314 kubelet[1338]: I1101 09:20:14.372231    1338 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-embed-certs-236314" podStartSLOduration=1.372208158 podStartE2EDuration="1.372208158s" podCreationTimestamp="2025-11-01 09:20:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 09:20:14.361486285 +0000 UTC m=+1.113626178" watchObservedRunningTime="2025-11-01 09:20:14.372208158 +0000 UTC m=+1.124348044"
	Nov 01 09:20:14 embed-certs-236314 kubelet[1338]: I1101 09:20:14.372367    1338 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-embed-certs-236314" podStartSLOduration=1.372360469 podStartE2EDuration="1.372360469s" podCreationTimestamp="2025-11-01 09:20:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 09:20:14.372142095 +0000 UTC m=+1.124281987" watchObservedRunningTime="2025-11-01 09:20:14.372360469 +0000 UTC m=+1.124500359"
	Nov 01 09:20:14 embed-certs-236314 kubelet[1338]: I1101 09:20:14.394930    1338 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-embed-certs-236314" podStartSLOduration=1.394908048 podStartE2EDuration="1.394908048s" podCreationTimestamp="2025-11-01 09:20:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 09:20:14.382890113 +0000 UTC m=+1.135030002" watchObservedRunningTime="2025-11-01 09:20:14.394908048 +0000 UTC m=+1.147047935"
	Nov 01 09:20:17 embed-certs-236314 kubelet[1338]: I1101 09:20:17.888425    1338 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Nov 01 09:20:17 embed-certs-236314 kubelet[1338]: I1101 09:20:17.889293    1338 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Nov 01 09:20:18 embed-certs-236314 kubelet[1338]: I1101 09:20:18.658930    1338 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/57a06788-c25e-43a7-9c69-158766d4b46b-lib-modules\") pod \"kube-proxy-55ft8\" (UID: \"57a06788-c25e-43a7-9c69-158766d4b46b\") " pod="kube-system/kube-proxy-55ft8"
	Nov 01 09:20:18 embed-certs-236314 kubelet[1338]: I1101 09:20:18.658991    1338 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/b2c42bdc-41df-4851-8d25-6810e5020f41-cni-cfg\") pod \"kindnet-mf8mj\" (UID: \"b2c42bdc-41df-4851-8d25-6810e5020f41\") " pod="kube-system/kindnet-mf8mj"
	Nov 01 09:20:18 embed-certs-236314 kubelet[1338]: I1101 09:20:18.659011    1338 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b2c42bdc-41df-4851-8d25-6810e5020f41-lib-modules\") pod \"kindnet-mf8mj\" (UID: \"b2c42bdc-41df-4851-8d25-6810e5020f41\") " pod="kube-system/kindnet-mf8mj"
	Nov 01 09:20:18 embed-certs-236314 kubelet[1338]: I1101 09:20:18.659056    1338 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-52pkf\" (UniqueName: \"kubernetes.io/projected/57a06788-c25e-43a7-9c69-158766d4b46b-kube-api-access-52pkf\") pod \"kube-proxy-55ft8\" (UID: \"57a06788-c25e-43a7-9c69-158766d4b46b\") " pod="kube-system/kube-proxy-55ft8"
	Nov 01 09:20:18 embed-certs-236314 kubelet[1338]: I1101 09:20:18.659145    1338 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b2c42bdc-41df-4851-8d25-6810e5020f41-xtables-lock\") pod \"kindnet-mf8mj\" (UID: \"b2c42bdc-41df-4851-8d25-6810e5020f41\") " pod="kube-system/kindnet-mf8mj"
	Nov 01 09:20:18 embed-certs-236314 kubelet[1338]: I1101 09:20:18.659169    1338 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fj48v\" (UniqueName: \"kubernetes.io/projected/b2c42bdc-41df-4851-8d25-6810e5020f41-kube-api-access-fj48v\") pod \"kindnet-mf8mj\" (UID: \"b2c42bdc-41df-4851-8d25-6810e5020f41\") " pod="kube-system/kindnet-mf8mj"
	Nov 01 09:20:18 embed-certs-236314 kubelet[1338]: I1101 09:20:18.659196    1338 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/57a06788-c25e-43a7-9c69-158766d4b46b-kube-proxy\") pod \"kube-proxy-55ft8\" (UID: \"57a06788-c25e-43a7-9c69-158766d4b46b\") " pod="kube-system/kube-proxy-55ft8"
	Nov 01 09:20:18 embed-certs-236314 kubelet[1338]: I1101 09:20:18.659225    1338 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/57a06788-c25e-43a7-9c69-158766d4b46b-xtables-lock\") pod \"kube-proxy-55ft8\" (UID: \"57a06788-c25e-43a7-9c69-158766d4b46b\") " pod="kube-system/kube-proxy-55ft8"
	Nov 01 09:20:19 embed-certs-236314 kubelet[1338]: I1101 09:20:19.392634    1338 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-55ft8" podStartSLOduration=1.39261187 podStartE2EDuration="1.39261187s" podCreationTimestamp="2025-11-01 09:20:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 09:20:19.392535488 +0000 UTC m=+6.144675380" watchObservedRunningTime="2025-11-01 09:20:19.39261187 +0000 UTC m=+6.144751764"
	Nov 01 09:20:24 embed-certs-236314 kubelet[1338]: I1101 09:20:24.056423    1338 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-mf8mj" podStartSLOduration=6.056397452 podStartE2EDuration="6.056397452s" podCreationTimestamp="2025-11-01 09:20:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 09:20:19.429635585 +0000 UTC m=+6.181775478" watchObservedRunningTime="2025-11-01 09:20:24.056397452 +0000 UTC m=+10.808537343"
	Nov 01 09:20:29 embed-certs-236314 kubelet[1338]: I1101 09:20:29.780398    1338 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Nov 01 09:20:29 embed-certs-236314 kubelet[1338]: I1101 09:20:29.843515    1338 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f1a303c9-2007-4eb4-a08b-c3ea11570c07-config-volume\") pod \"coredns-66bc5c9577-wwvth\" (UID: \"f1a303c9-2007-4eb4-a08b-c3ea11570c07\") " pod="kube-system/coredns-66bc5c9577-wwvth"
	Nov 01 09:20:29 embed-certs-236314 kubelet[1338]: I1101 09:20:29.843573    1338 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hftqx\" (UniqueName: \"kubernetes.io/projected/f1a303c9-2007-4eb4-a08b-c3ea11570c07-kube-api-access-hftqx\") pod \"coredns-66bc5c9577-wwvth\" (UID: \"f1a303c9-2007-4eb4-a08b-c3ea11570c07\") " pod="kube-system/coredns-66bc5c9577-wwvth"
	Nov 01 09:20:29 embed-certs-236314 kubelet[1338]: I1101 09:20:29.843651    1338 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/5cdd98f3-13ee-4dff-be42-c5c0686106d4-tmp\") pod \"storage-provisioner\" (UID: \"5cdd98f3-13ee-4dff-be42-c5c0686106d4\") " pod="kube-system/storage-provisioner"
	Nov 01 09:20:29 embed-certs-236314 kubelet[1338]: I1101 09:20:29.843693    1338 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-58s48\" (UniqueName: \"kubernetes.io/projected/5cdd98f3-13ee-4dff-be42-c5c0686106d4-kube-api-access-58s48\") pod \"storage-provisioner\" (UID: \"5cdd98f3-13ee-4dff-be42-c5c0686106d4\") " pod="kube-system/storage-provisioner"
	Nov 01 09:20:30 embed-certs-236314 kubelet[1338]: I1101 09:20:30.454973    1338 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=11.454934435 podStartE2EDuration="11.454934435s" podCreationTimestamp="2025-11-01 09:20:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 09:20:30.454803766 +0000 UTC m=+17.206943663" watchObservedRunningTime="2025-11-01 09:20:30.454934435 +0000 UTC m=+17.207074328"
	Nov 01 09:20:30 embed-certs-236314 kubelet[1338]: I1101 09:20:30.532665    1338 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-wwvth" podStartSLOduration=12.532623589 podStartE2EDuration="12.532623589s" podCreationTimestamp="2025-11-01 09:20:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 09:20:30.532340307 +0000 UTC m=+17.284480193" watchObservedRunningTime="2025-11-01 09:20:30.532623589 +0000 UTC m=+17.284763480"
	Nov 01 09:20:32 embed-certs-236314 kubelet[1338]: I1101 09:20:32.462031    1338 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-55jzf\" (UniqueName: \"kubernetes.io/projected/6e751a41-58d1-4511-8037-a88d0dc71611-kube-api-access-55jzf\") pod \"busybox\" (UID: \"6e751a41-58d1-4511-8037-a88d0dc71611\") " pod="default/busybox"
	Nov 01 09:20:34 embed-certs-236314 kubelet[1338]: I1101 09:20:34.429812    1338 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/busybox" podStartSLOduration=1.6180263030000002 podStartE2EDuration="2.429787745s" podCreationTimestamp="2025-11-01 09:20:32 +0000 UTC" firstStartedPulling="2025-11-01 09:20:32.738844235 +0000 UTC m=+19.490984109" lastFinishedPulling="2025-11-01 09:20:33.550605678 +0000 UTC m=+20.302745551" observedRunningTime="2025-11-01 09:20:34.429555572 +0000 UTC m=+21.181695465" watchObservedRunningTime="2025-11-01 09:20:34.429787745 +0000 UTC m=+21.181927638"
	
	
	==> storage-provisioner [f42f7d7f74c512df1d6f1000d2b060ab0eedd9b06d5b4cc885dcacb33a5fd3d5] <==
	I1101 09:20:30.165002       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1101 09:20:30.174891       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1101 09:20:30.174958       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1101 09:20:30.178209       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:20:30.182975       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1101 09:20:30.183124       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1101 09:20:30.183264       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"c262ede2-8436-41d8-b457-ce72d2fd66a5", APIVersion:"v1", ResourceVersion:"444", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-236314_2999a755-3427-4002-ac77-819fd4579187 became leader
	I1101 09:20:30.183306       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-236314_2999a755-3427-4002-ac77-819fd4579187!
	W1101 09:20:30.185553       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:20:30.189307       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1101 09:20:30.283609       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-236314_2999a755-3427-4002-ac77-819fd4579187!
	W1101 09:20:32.192726       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:20:32.199044       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:20:34.202089       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:20:34.207599       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:20:36.210878       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:20:36.219381       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:20:38.222328       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:20:38.226553       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:20:40.229828       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:20:40.234985       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:20:42.237818       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:20:42.242329       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-236314 -n embed-certs-236314
helpers_test.go:269: (dbg) Run:  kubectl --context embed-certs-236314 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/embed-certs/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (2.27s)
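The post-mortem queries shown above (helpers_test.go lines) can be replayed by hand to investigate this failure outside CI. The sketch below is an assumption-laden example, not part of the recorded run: it presumes a minikube source checkout with out/minikube-linux-amd64 already built and the embed-certs-236314 profile still present (profile names differ between runs), and the go test invocation may need the extra flags the CI harness normally supplies.

	# Re-run only this subtest from a minikube checkout (package path assumed; CI may pass extra flags).
	go test ./test/integration -run 'TestStartStop/group/embed-certs/serial/EnableAddonWhileActive' -v
	# Repeat the post-mortem checks recorded above against the live profile.
	out/minikube-linux-amd64 status --format='{{.APIServer}}' -p embed-certs-236314 -n embed-certs-236314
	kubectl --context embed-certs-236314 get po -A -o=jsonpath='{.items[*].metadata.name}' --field-selector=status.phase!=Running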

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/Pause (6.98s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-152344 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p old-k8s-version-152344 --alsologtostderr -v=1: exit status 80 (2.505613332s)

                                                
                                                
-- stdout --
	* Pausing node old-k8s-version-152344 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1101 09:21:00.462599  256846 out.go:360] Setting OutFile to fd 1 ...
	I1101 09:21:00.462729  256846 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:21:00.462743  256846 out.go:374] Setting ErrFile to fd 2...
	I1101 09:21:00.462748  256846 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:21:00.462978  256846 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21835-5913/.minikube/bin
	I1101 09:21:00.463216  256846 out.go:368] Setting JSON to false
	I1101 09:21:00.463238  256846 mustload.go:66] Loading cluster: old-k8s-version-152344
	I1101 09:21:00.463587  256846 config.go:182] Loaded profile config "old-k8s-version-152344": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1101 09:21:00.464015  256846 cli_runner.go:164] Run: docker container inspect old-k8s-version-152344 --format={{.State.Status}}
	I1101 09:21:00.483720  256846 host.go:66] Checking if "old-k8s-version-152344" exists ...
	I1101 09:21:00.484075  256846 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 09:21:00.546033  256846 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:78 OomKillDisable:false NGoroutines:87 SystemTime:2025-11-01 09:21:00.534973314 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1101 09:21:00.546691  256846 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-
cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21800/minikube-v1.37.0-1761658712-21800-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1761658712-21800/minikube-v1.37.0-1761658712-21800-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1761658712-21800-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qe
mu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:old-k8s-version-152344 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=
true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1101 09:21:00.548680  256846 out.go:179] * Pausing node old-k8s-version-152344 ... 
	I1101 09:21:00.549923  256846 host.go:66] Checking if "old-k8s-version-152344" exists ...
	I1101 09:21:00.550228  256846 ssh_runner.go:195] Run: systemctl --version
	I1101 09:21:00.550293  256846 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-152344
	I1101 09:21:00.569669  256846 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/21835-5913/.minikube/machines/old-k8s-version-152344/id_rsa Username:docker}
	I1101 09:21:00.670074  256846 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 09:21:00.702792  256846 pause.go:52] kubelet running: true
	I1101 09:21:00.702890  256846 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1101 09:21:00.872919  256846 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1101 09:21:00.873041  256846 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1101 09:21:00.941303  256846 cri.go:89] found id: "77f34a751546cae898ab472655f56c5c32eccc085038636d42ef139d6abf16c6"
	I1101 09:21:00.941332  256846 cri.go:89] found id: "4c720398bb4fa757d679305ceeef216ff753830d7ac6b7c1e25b129c46813c05"
	I1101 09:21:00.941338  256846 cri.go:89] found id: "5b6c9b666836f422d8078161076c25e4166a0975599c4a826a525cd241adf6aa"
	I1101 09:21:00.941343  256846 cri.go:89] found id: "0e646a0053e644e0c8aaf75d4f21ac7cb97c0c1c5004411b54aea626f1cb6948"
	I1101 09:21:00.941348  256846 cri.go:89] found id: "a4a5247c122cf0b4dc4fa5a348dbacca1caae7a3afc1116e75392ae7a3ec6dd2"
	I1101 09:21:00.941355  256846 cri.go:89] found id: "b3160f6c872bd74ec717f3bdc6505f4d22b0d84f6f4142f3e0529df91607f430"
	I1101 09:21:00.941359  256846 cri.go:89] found id: "872d88871004564b9fec7c298dff556377a0ff948066c78e7c1b182d8b6271f4"
	I1101 09:21:00.941363  256846 cri.go:89] found id: "a606cb94dcd3aedf1e555a4a15b3aa6ab1c1fbfea9bfe61bea7fb5777b59cb5a"
	I1101 09:21:00.941367  256846 cri.go:89] found id: "e85dd09f3807f74307934a4f301776d16ac8f60eb9cc757e6f6bf2553901baef"
	I1101 09:21:00.941378  256846 cri.go:89] found id: "8270b89462e20b129ac3e5fdb15f4995bd39adc8b8af28d2aa2bfb55eb070be9"
	I1101 09:21:00.941386  256846 cri.go:89] found id: "6cd6c8363476bd0a34deef513a204c6da2611045224adb8efc65c2778e33c742"
	I1101 09:21:00.941391  256846 cri.go:89] found id: ""
	I1101 09:21:00.941436  256846 ssh_runner.go:195] Run: sudo runc list -f json
	I1101 09:21:00.953787  256846 retry.go:31] will retry after 336.784301ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T09:21:00Z" level=error msg="open /run/runc: no such file or directory"
	I1101 09:21:01.291476  256846 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 09:21:01.305469  256846 pause.go:52] kubelet running: false
	I1101 09:21:01.305554  256846 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1101 09:21:01.442621  256846 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1101 09:21:01.442731  256846 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1101 09:21:01.514446  256846 cri.go:89] found id: "77f34a751546cae898ab472655f56c5c32eccc085038636d42ef139d6abf16c6"
	I1101 09:21:01.514466  256846 cri.go:89] found id: "4c720398bb4fa757d679305ceeef216ff753830d7ac6b7c1e25b129c46813c05"
	I1101 09:21:01.514470  256846 cri.go:89] found id: "5b6c9b666836f422d8078161076c25e4166a0975599c4a826a525cd241adf6aa"
	I1101 09:21:01.514474  256846 cri.go:89] found id: "0e646a0053e644e0c8aaf75d4f21ac7cb97c0c1c5004411b54aea626f1cb6948"
	I1101 09:21:01.514477  256846 cri.go:89] found id: "a4a5247c122cf0b4dc4fa5a348dbacca1caae7a3afc1116e75392ae7a3ec6dd2"
	I1101 09:21:01.514481  256846 cri.go:89] found id: "b3160f6c872bd74ec717f3bdc6505f4d22b0d84f6f4142f3e0529df91607f430"
	I1101 09:21:01.514483  256846 cri.go:89] found id: "872d88871004564b9fec7c298dff556377a0ff948066c78e7c1b182d8b6271f4"
	I1101 09:21:01.514485  256846 cri.go:89] found id: "a606cb94dcd3aedf1e555a4a15b3aa6ab1c1fbfea9bfe61bea7fb5777b59cb5a"
	I1101 09:21:01.514488  256846 cri.go:89] found id: "e85dd09f3807f74307934a4f301776d16ac8f60eb9cc757e6f6bf2553901baef"
	I1101 09:21:01.514493  256846 cri.go:89] found id: "8270b89462e20b129ac3e5fdb15f4995bd39adc8b8af28d2aa2bfb55eb070be9"
	I1101 09:21:01.514496  256846 cri.go:89] found id: "6cd6c8363476bd0a34deef513a204c6da2611045224adb8efc65c2778e33c742"
	I1101 09:21:01.514498  256846 cri.go:89] found id: ""
	I1101 09:21:01.514545  256846 ssh_runner.go:195] Run: sudo runc list -f json
	I1101 09:21:01.527029  256846 retry.go:31] will retry after 419.947757ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T09:21:01Z" level=error msg="open /run/runc: no such file or directory"
	I1101 09:21:01.947556  256846 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 09:21:01.961625  256846 pause.go:52] kubelet running: false
	I1101 09:21:01.961717  256846 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1101 09:21:02.103434  256846 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1101 09:21:02.103525  256846 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1101 09:21:02.173513  256846 cri.go:89] found id: "77f34a751546cae898ab472655f56c5c32eccc085038636d42ef139d6abf16c6"
	I1101 09:21:02.173539  256846 cri.go:89] found id: "4c720398bb4fa757d679305ceeef216ff753830d7ac6b7c1e25b129c46813c05"
	I1101 09:21:02.173544  256846 cri.go:89] found id: "5b6c9b666836f422d8078161076c25e4166a0975599c4a826a525cd241adf6aa"
	I1101 09:21:02.173548  256846 cri.go:89] found id: "0e646a0053e644e0c8aaf75d4f21ac7cb97c0c1c5004411b54aea626f1cb6948"
	I1101 09:21:02.173551  256846 cri.go:89] found id: "a4a5247c122cf0b4dc4fa5a348dbacca1caae7a3afc1116e75392ae7a3ec6dd2"
	I1101 09:21:02.173554  256846 cri.go:89] found id: "b3160f6c872bd74ec717f3bdc6505f4d22b0d84f6f4142f3e0529df91607f430"
	I1101 09:21:02.173556  256846 cri.go:89] found id: "872d88871004564b9fec7c298dff556377a0ff948066c78e7c1b182d8b6271f4"
	I1101 09:21:02.173558  256846 cri.go:89] found id: "a606cb94dcd3aedf1e555a4a15b3aa6ab1c1fbfea9bfe61bea7fb5777b59cb5a"
	I1101 09:21:02.173561  256846 cri.go:89] found id: "e85dd09f3807f74307934a4f301776d16ac8f60eb9cc757e6f6bf2553901baef"
	I1101 09:21:02.173571  256846 cri.go:89] found id: "8270b89462e20b129ac3e5fdb15f4995bd39adc8b8af28d2aa2bfb55eb070be9"
	I1101 09:21:02.173574  256846 cri.go:89] found id: "6cd6c8363476bd0a34deef513a204c6da2611045224adb8efc65c2778e33c742"
	I1101 09:21:02.173576  256846 cri.go:89] found id: ""
	I1101 09:21:02.173613  256846 ssh_runner.go:195] Run: sudo runc list -f json
	I1101 09:21:02.185856  256846 retry.go:31] will retry after 461.290072ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T09:21:02Z" level=error msg="open /run/runc: no such file or directory"
	I1101 09:21:02.647565  256846 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 09:21:02.661500  256846 pause.go:52] kubelet running: false
	I1101 09:21:02.661557  256846 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1101 09:21:02.810925  256846 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1101 09:21:02.811015  256846 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1101 09:21:02.883671  256846 cri.go:89] found id: "77f34a751546cae898ab472655f56c5c32eccc085038636d42ef139d6abf16c6"
	I1101 09:21:02.883692  256846 cri.go:89] found id: "4c720398bb4fa757d679305ceeef216ff753830d7ac6b7c1e25b129c46813c05"
	I1101 09:21:02.883696  256846 cri.go:89] found id: "5b6c9b666836f422d8078161076c25e4166a0975599c4a826a525cd241adf6aa"
	I1101 09:21:02.883698  256846 cri.go:89] found id: "0e646a0053e644e0c8aaf75d4f21ac7cb97c0c1c5004411b54aea626f1cb6948"
	I1101 09:21:02.883701  256846 cri.go:89] found id: "a4a5247c122cf0b4dc4fa5a348dbacca1caae7a3afc1116e75392ae7a3ec6dd2"
	I1101 09:21:02.883704  256846 cri.go:89] found id: "b3160f6c872bd74ec717f3bdc6505f4d22b0d84f6f4142f3e0529df91607f430"
	I1101 09:21:02.883706  256846 cri.go:89] found id: "872d88871004564b9fec7c298dff556377a0ff948066c78e7c1b182d8b6271f4"
	I1101 09:21:02.883708  256846 cri.go:89] found id: "a606cb94dcd3aedf1e555a4a15b3aa6ab1c1fbfea9bfe61bea7fb5777b59cb5a"
	I1101 09:21:02.883711  256846 cri.go:89] found id: "e85dd09f3807f74307934a4f301776d16ac8f60eb9cc757e6f6bf2553901baef"
	I1101 09:21:02.883716  256846 cri.go:89] found id: "8270b89462e20b129ac3e5fdb15f4995bd39adc8b8af28d2aa2bfb55eb070be9"
	I1101 09:21:02.883730  256846 cri.go:89] found id: "6cd6c8363476bd0a34deef513a204c6da2611045224adb8efc65c2778e33c742"
	I1101 09:21:02.883733  256846 cri.go:89] found id: ""
	I1101 09:21:02.883799  256846 ssh_runner.go:195] Run: sudo runc list -f json
	I1101 09:21:02.899078  256846 out.go:203] 
	W1101 09:21:02.900522  256846 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T09:21:02Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T09:21:02Z" level=error msg="open /run/runc: no such file or directory"
	
	W1101 09:21:02.900551  256846 out.go:285] * 
	* 
	W1101 09:21:02.905300  256846 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1101 09:21:02.906751  256846 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-amd64 pause -p old-k8s-version-152344 --alsologtostderr -v=1 failed: exit status 80
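Note on the failure: every retry of "sudo runc list -f json" in the stderr above fails with "open /run/runc: no such file or directory", so pause never obtains a container list to act on. A minimal manual check (a diagnostic sketch, not part of the test suite; the profile name is taken from this run and /run/runc is simply the path named in the error, not a confirmed CRI-O runtime-root setting) would be:

	minikube ssh -p old-k8s-version-152344 "sudo ls -ld /run/runc"
	minikube ssh -p old-k8s-version-152344 "sudo crictl ps"

The first command shows whether the runc state directory exists on the node; the second confirms whether CRI-O itself still reports the containers listed above as running.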
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect old-k8s-version-152344
helpers_test.go:243: (dbg) docker inspect old-k8s-version-152344:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "89c3ec5c14cb5bc7fcba09480cd2896570f287e2f8a794c80fecfe7d058e83fe",
	        "Created": "2025-11-01T09:18:45.394454049Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 245536,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-01T09:19:57.732800789Z",
	            "FinishedAt": "2025-11-01T09:19:56.784093761Z"
	        },
	        "Image": "sha256:a1caeebaf98ed0136731e905a1e086f77985a42c2ebb5a7e0b3d0bd7fcbe10cc",
	        "ResolvConfPath": "/var/lib/docker/containers/89c3ec5c14cb5bc7fcba09480cd2896570f287e2f8a794c80fecfe7d058e83fe/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/89c3ec5c14cb5bc7fcba09480cd2896570f287e2f8a794c80fecfe7d058e83fe/hostname",
	        "HostsPath": "/var/lib/docker/containers/89c3ec5c14cb5bc7fcba09480cd2896570f287e2f8a794c80fecfe7d058e83fe/hosts",
	        "LogPath": "/var/lib/docker/containers/89c3ec5c14cb5bc7fcba09480cd2896570f287e2f8a794c80fecfe7d058e83fe/89c3ec5c14cb5bc7fcba09480cd2896570f287e2f8a794c80fecfe7d058e83fe-json.log",
	        "Name": "/old-k8s-version-152344",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-152344:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "old-k8s-version-152344",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "89c3ec5c14cb5bc7fcba09480cd2896570f287e2f8a794c80fecfe7d058e83fe",
	                "LowerDir": "/var/lib/docker/overlay2/2167d10a6be83eefc462824ae671de179964763d19a49dc3e2df049d863ec511-init/diff:/var/lib/docker/overlay2/95b4043403bc6c2b0722fad0bb31817195e1e004282cc78b772e7c8c9f9def2d/diff",
	                "MergedDir": "/var/lib/docker/overlay2/2167d10a6be83eefc462824ae671de179964763d19a49dc3e2df049d863ec511/merged",
	                "UpperDir": "/var/lib/docker/overlay2/2167d10a6be83eefc462824ae671de179964763d19a49dc3e2df049d863ec511/diff",
	                "WorkDir": "/var/lib/docker/overlay2/2167d10a6be83eefc462824ae671de179964763d19a49dc3e2df049d863ec511/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-152344",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-152344/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-152344",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-152344",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-152344",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "58cd051a9311c7483583bc65aad5229ec67ddeaaf8519b2168d3e6d6233f7dac",
	            "SandboxKey": "/var/run/docker/netns/58cd051a9311",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33068"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33069"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33072"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33070"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33071"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-152344": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "12:7e:7e:45:39:d0",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "afdc78f81dc3d841ed82d50aa51ef8a188690396e16fb0187f6b53f70953a37a",
	                    "EndpointID": "ea1fc51d54fe5d9076c50c01985fbcfe49b42a1ca168a7591cc62ce1af1b94b8",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-152344",
	                        "89c3ec5c14cb"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-152344 -n old-k8s-version-152344
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-152344 -n old-k8s-version-152344: exit status 2 (376.745654ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-152344 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-152344 logs -n 25: (1.282175221s)
helpers_test.go:260: TestStartStop/group/old-k8s-version/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬────────────────────
─┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼────────────────────
─┤
	│ ssh     │ -p cilium-204434 sudo systemctl cat crio --no-pager                                                                                                                                                                                           │ cilium-204434          │ jenkins │ v1.37.0 │ 01 Nov 25 09:18 UTC │                     │
	│ ssh     │ -p cilium-204434 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                 │ cilium-204434          │ jenkins │ v1.37.0 │ 01 Nov 25 09:18 UTC │                     │
	│ ssh     │ -p cilium-204434 sudo crio config                                                                                                                                                                                                             │ cilium-204434          │ jenkins │ v1.37.0 │ 01 Nov 25 09:18 UTC │                     │
	│ delete  │ -p cilium-204434                                                                                                                                                                                                                              │ cilium-204434          │ jenkins │ v1.37.0 │ 01 Nov 25 09:18 UTC │ 01 Nov 25 09:18 UTC │
	│ start   │ -p old-k8s-version-152344 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-152344 │ jenkins │ v1.37.0 │ 01 Nov 25 09:18 UTC │ 01 Nov 25 09:19 UTC │
	│ delete  │ -p running-upgrade-274843                                                                                                                                                                                                                     │ running-upgrade-274843 │ jenkins │ v1.37.0 │ 01 Nov 25 09:18 UTC │ 01 Nov 25 09:18 UTC │
	│ start   │ -p no-preload-397460 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-397460      │ jenkins │ v1.37.0 │ 01 Nov 25 09:18 UTC │ 01 Nov 25 09:19 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-152344 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-152344 │ jenkins │ v1.37.0 │ 01 Nov 25 09:19 UTC │                     │
	│ stop    │ -p old-k8s-version-152344 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-152344 │ jenkins │ v1.37.0 │ 01 Nov 25 09:19 UTC │ 01 Nov 25 09:19 UTC │
	│ start   │ -p cert-expiration-303094 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-303094 │ jenkins │ v1.37.0 │ 01 Nov 25 09:19 UTC │ 01 Nov 25 09:19 UTC │
	│ addons  │ enable metrics-server -p no-preload-397460 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-397460      │ jenkins │ v1.37.0 │ 01 Nov 25 09:19 UTC │                     │
	│ delete  │ -p cert-expiration-303094                                                                                                                                                                                                                     │ cert-expiration-303094 │ jenkins │ v1.37.0 │ 01 Nov 25 09:19 UTC │ 01 Nov 25 09:19 UTC │
	│ stop    │ -p no-preload-397460 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-397460      │ jenkins │ v1.37.0 │ 01 Nov 25 09:19 UTC │ 01 Nov 25 09:20 UTC │
	│ start   │ -p embed-certs-236314 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-236314     │ jenkins │ v1.37.0 │ 01 Nov 25 09:19 UTC │ 01 Nov 25 09:20 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-152344 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-152344 │ jenkins │ v1.37.0 │ 01 Nov 25 09:19 UTC │ 01 Nov 25 09:19 UTC │
	│ start   │ -p old-k8s-version-152344 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-152344 │ jenkins │ v1.37.0 │ 01 Nov 25 09:19 UTC │ 01 Nov 25 09:20 UTC │
	│ addons  │ enable dashboard -p no-preload-397460 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-397460      │ jenkins │ v1.37.0 │ 01 Nov 25 09:20 UTC │ 01 Nov 25 09:20 UTC │
	│ start   │ -p no-preload-397460 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-397460      │ jenkins │ v1.37.0 │ 01 Nov 25 09:20 UTC │ 01 Nov 25 09:20 UTC │
	│ addons  │ enable metrics-server -p embed-certs-236314 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-236314     │ jenkins │ v1.37.0 │ 01 Nov 25 09:20 UTC │                     │
	│ stop    │ -p embed-certs-236314 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-236314     │ jenkins │ v1.37.0 │ 01 Nov 25 09:20 UTC │ 01 Nov 25 09:20 UTC │
	│ addons  │ enable dashboard -p embed-certs-236314 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-236314     │ jenkins │ v1.37.0 │ 01 Nov 25 09:20 UTC │ 01 Nov 25 09:20 UTC │
	│ start   │ -p embed-certs-236314 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-236314     │ jenkins │ v1.37.0 │ 01 Nov 25 09:20 UTC │                     │
	│ image   │ old-k8s-version-152344 image list --format=json                                                                                                                                                                                               │ old-k8s-version-152344 │ jenkins │ v1.37.0 │ 01 Nov 25 09:21 UTC │ 01 Nov 25 09:21 UTC │
	│ pause   │ -p old-k8s-version-152344 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-152344 │ jenkins │ v1.37.0 │ 01 Nov 25 09:21 UTC │                     │
	│ image   │ no-preload-397460 image list --format=json                                                                                                                                                                                                    │ no-preload-397460      │ jenkins │ v1.37.0 │ 01 Nov 25 09:21 UTC │ 01 Nov 25 09:21 UTC │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴────────────────────
─┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/01 09:20:59
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1101 09:20:59.428556  256247 out.go:360] Setting OutFile to fd 1 ...
	I1101 09:20:59.428818  256247 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:20:59.428829  256247 out.go:374] Setting ErrFile to fd 2...
	I1101 09:20:59.428834  256247 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:20:59.429086  256247 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21835-5913/.minikube/bin
	I1101 09:20:59.429532  256247 out.go:368] Setting JSON to false
	I1101 09:20:59.430786  256247 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":3807,"bootTime":1761985052,"procs":302,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1101 09:20:59.430894  256247 start.go:143] virtualization: kvm guest
	I1101 09:20:59.433043  256247 out.go:179] * [embed-certs-236314] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1101 09:20:59.434385  256247 out.go:179]   - MINIKUBE_LOCATION=21835
	I1101 09:20:59.434401  256247 notify.go:221] Checking for updates...
	I1101 09:20:59.436934  256247 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1101 09:20:59.438200  256247 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21835-5913/kubeconfig
	I1101 09:20:59.439375  256247 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21835-5913/.minikube
	I1101 09:20:59.440741  256247 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1101 09:20:59.441964  256247 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1101 09:20:59.443571  256247 config.go:182] Loaded profile config "embed-certs-236314": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:20:59.444071  256247 driver.go:422] Setting default libvirt URI to qemu:///system
	I1101 09:20:59.469169  256247 docker.go:124] docker version: linux-28.5.1:Docker Engine - Community
	I1101 09:20:59.469258  256247 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 09:20:59.527156  256247 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:false NGoroutines:75 SystemTime:2025-11-01 09:20:59.517141551 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1101 09:20:59.527271  256247 docker.go:319] overlay module found
	I1101 09:20:59.529297  256247 out.go:179] * Using the docker driver based on existing profile
	I1101 09:20:59.530647  256247 start.go:309] selected driver: docker
	I1101 09:20:59.530667  256247 start.go:930] validating driver "docker" against &{Name:embed-certs-236314 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-236314 Namespace:default APIServerHAVIP: APIServerN
ame:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:
9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 09:20:59.530767  256247 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1101 09:20:59.531375  256247 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 09:20:59.591821  256247 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:false NGoroutines:75 SystemTime:2025-11-01 09:20:59.581513893 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1101 09:20:59.592141  256247 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1101 09:20:59.592171  256247 cni.go:84] Creating CNI manager for ""
	I1101 09:20:59.592221  256247 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1101 09:20:59.592272  256247 start.go:353] cluster config:
	{Name:embed-certs-236314 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-236314 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Contain
erRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false
DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 09:20:59.594010  256247 out.go:179] * Starting "embed-certs-236314" primary control-plane node in "embed-certs-236314" cluster
	I1101 09:20:59.595373  256247 cache.go:124] Beginning downloading kic base image for docker with crio
	I1101 09:20:59.596730  256247 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1101 09:20:59.598034  256247 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1101 09:20:59.598070  256247 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1101 09:20:59.598094  256247 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21835-5913/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1101 09:20:59.598109  256247 cache.go:59] Caching tarball of preloaded images
	I1101 09:20:59.598199  256247 preload.go:233] Found /home/jenkins/minikube-integration/21835-5913/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1101 09:20:59.598213  256247 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1101 09:20:59.598344  256247 profile.go:143] Saving config to /home/jenkins/minikube-integration/21835-5913/.minikube/profiles/embed-certs-236314/config.json ...
	I1101 09:20:59.620135  256247 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1101 09:20:59.620159  256247 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1101 09:20:59.620175  256247 cache.go:233] Successfully downloaded all kic artifacts
	I1101 09:20:59.620198  256247 start.go:360] acquireMachinesLock for embed-certs-236314: {Name:mk8eda201f80ebfb2f2bb01891a2b839f76263b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1101 09:20:59.620253  256247 start.go:364] duration metric: took 37.33µs to acquireMachinesLock for "embed-certs-236314"
	I1101 09:20:59.620271  256247 start.go:96] Skipping create...Using existing machine configuration
	I1101 09:20:59.620276  256247 fix.go:54] fixHost starting: 
	I1101 09:20:59.620535  256247 cli_runner.go:164] Run: docker container inspect embed-certs-236314 --format={{.State.Status}}
	I1101 09:20:59.639500  256247 fix.go:112] recreateIfNeeded on embed-certs-236314: state=Stopped err=<nil>
	W1101 09:20:59.639552  256247 fix.go:138] unexpected machine state, will restart: <nil>
	I1101 09:20:58.468250  216020 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": (10.063103618s)
	W1101 09:20:58.468297  216020 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	Unable to connect to the server: net/http: TLS handshake timeout
	 output: 
	** stderr ** 
	Unable to connect to the server: net/http: TLS handshake timeout
	
	** /stderr **
	I1101 09:20:58.468307  216020 logs.go:123] Gathering logs for kube-apiserver [b88c1a6c22eb7f0b872bf19c02addfdad95c9f578a8590d19c57a767cb19559f] ...
	I1101 09:20:58.468322  216020 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b88c1a6c22eb7f0b872bf19c02addfdad95c9f578a8590d19c57a767cb19559f"
	
	
	==> CRI-O <==
	Nov 01 09:20:25 old-k8s-version-152344 crio[561]: time="2025-11-01T09:20:25.516312474Z" level=info msg="Created container 6cd6c8363476bd0a34deef513a204c6da2611045224adb8efc65c2778e33c742: kubernetes-dashboard/kubernetes-dashboard-8694d4445c-qjl6t/kubernetes-dashboard" id=8934bbf7-b7f1-408c-ac4a-e6acdf5446da name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 09:20:25 old-k8s-version-152344 crio[561]: time="2025-11-01T09:20:25.517057832Z" level=info msg="Starting container: 6cd6c8363476bd0a34deef513a204c6da2611045224adb8efc65c2778e33c742" id=5ed41685-d974-46d8-b7e7-6515cd40b88f name=/runtime.v1.RuntimeService/StartContainer
	Nov 01 09:20:25 old-k8s-version-152344 crio[561]: time="2025-11-01T09:20:25.519430139Z" level=info msg="Started container" PID=1705 containerID=6cd6c8363476bd0a34deef513a204c6da2611045224adb8efc65c2778e33c742 description=kubernetes-dashboard/kubernetes-dashboard-8694d4445c-qjl6t/kubernetes-dashboard id=5ed41685-d974-46d8-b7e7-6515cd40b88f name=/runtime.v1.RuntimeService/StartContainer sandboxID=2c7ddba118785d2deb1fc9673a117ff1b35ba7728aadfca5a95a4f48f62329af
	Nov 01 09:20:38 old-k8s-version-152344 crio[561]: time="2025-11-01T09:20:38.163574837Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=27ff8c51-7c4d-4ff0-8e43-dcfea4b0460f name=/runtime.v1.ImageService/ImageStatus
	Nov 01 09:20:38 old-k8s-version-152344 crio[561]: time="2025-11-01T09:20:38.164576036Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=b06a6613-7caa-4c25-ba21-61165e9288a4 name=/runtime.v1.ImageService/ImageStatus
	Nov 01 09:20:38 old-k8s-version-152344 crio[561]: time="2025-11-01T09:20:38.165679763Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=1ff6bd49-468f-47ae-9c10-617e48fffd0b name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 09:20:38 old-k8s-version-152344 crio[561]: time="2025-11-01T09:20:38.165831982Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 09:20:38 old-k8s-version-152344 crio[561]: time="2025-11-01T09:20:38.171322821Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 09:20:38 old-k8s-version-152344 crio[561]: time="2025-11-01T09:20:38.171532521Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/2e436ef81431ba7cf3db6b233fcc814196c8653e4428b79b25f2ea3d552e74f6/merged/etc/passwd: no such file or directory"
	Nov 01 09:20:38 old-k8s-version-152344 crio[561]: time="2025-11-01T09:20:38.171572584Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/2e436ef81431ba7cf3db6b233fcc814196c8653e4428b79b25f2ea3d552e74f6/merged/etc/group: no such file or directory"
	Nov 01 09:20:38 old-k8s-version-152344 crio[561]: time="2025-11-01T09:20:38.171930821Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 09:20:38 old-k8s-version-152344 crio[561]: time="2025-11-01T09:20:38.199953394Z" level=info msg="Created container 77f34a751546cae898ab472655f56c5c32eccc085038636d42ef139d6abf16c6: kube-system/storage-provisioner/storage-provisioner" id=1ff6bd49-468f-47ae-9c10-617e48fffd0b name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 09:20:38 old-k8s-version-152344 crio[561]: time="2025-11-01T09:20:38.200756258Z" level=info msg="Starting container: 77f34a751546cae898ab472655f56c5c32eccc085038636d42ef139d6abf16c6" id=165649da-dc73-441f-8857-37c9f13b58ae name=/runtime.v1.RuntimeService/StartContainer
	Nov 01 09:20:38 old-k8s-version-152344 crio[561]: time="2025-11-01T09:20:38.202684475Z" level=info msg="Started container" PID=1727 containerID=77f34a751546cae898ab472655f56c5c32eccc085038636d42ef139d6abf16c6 description=kube-system/storage-provisioner/storage-provisioner id=165649da-dc73-441f-8857-37c9f13b58ae name=/runtime.v1.RuntimeService/StartContainer sandboxID=767963ee6ddc60b9fb8cd5aeb646be453ff51b86ada9b46b97faa7bd74f197ec
	Nov 01 09:20:44 old-k8s-version-152344 crio[561]: time="2025-11-01T09:20:44.044637254Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=b6d4af81-4ee4-4fac-a0d4-c0fd97817899 name=/runtime.v1.ImageService/ImageStatus
	Nov 01 09:20:44 old-k8s-version-152344 crio[561]: time="2025-11-01T09:20:44.045576541Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=5536f350-e180-4d53-904c-4ca9da79b95a name=/runtime.v1.ImageService/ImageStatus
	Nov 01 09:20:44 old-k8s-version-152344 crio[561]: time="2025-11-01T09:20:44.0466031Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-j7gbs/dashboard-metrics-scraper" id=b182efb6-832e-4f64-9c24-5f56993671f1 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 09:20:44 old-k8s-version-152344 crio[561]: time="2025-11-01T09:20:44.046773798Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 09:20:44 old-k8s-version-152344 crio[561]: time="2025-11-01T09:20:44.054073889Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 09:20:44 old-k8s-version-152344 crio[561]: time="2025-11-01T09:20:44.054755445Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 09:20:44 old-k8s-version-152344 crio[561]: time="2025-11-01T09:20:44.085859311Z" level=info msg="Created container 8270b89462e20b129ac3e5fdb15f4995bd39adc8b8af28d2aa2bfb55eb070be9: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-j7gbs/dashboard-metrics-scraper" id=b182efb6-832e-4f64-9c24-5f56993671f1 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 09:20:44 old-k8s-version-152344 crio[561]: time="2025-11-01T09:20:44.086568399Z" level=info msg="Starting container: 8270b89462e20b129ac3e5fdb15f4995bd39adc8b8af28d2aa2bfb55eb070be9" id=14f870ad-8a21-4589-8310-d1afca071fc4 name=/runtime.v1.RuntimeService/StartContainer
	Nov 01 09:20:44 old-k8s-version-152344 crio[561]: time="2025-11-01T09:20:44.088668685Z" level=info msg="Started container" PID=1743 containerID=8270b89462e20b129ac3e5fdb15f4995bd39adc8b8af28d2aa2bfb55eb070be9 description=kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-j7gbs/dashboard-metrics-scraper id=14f870ad-8a21-4589-8310-d1afca071fc4 name=/runtime.v1.RuntimeService/StartContainer sandboxID=f32c0500baeb9bfc9c20637e5f7099b0e272f79f99dd595898529199f9021069
	Nov 01 09:20:44 old-k8s-version-152344 crio[561]: time="2025-11-01T09:20:44.184978433Z" level=info msg="Removing container: 5e215f9c05cd829424f63ba914abecae250267dd476388a79b478303e529116e" id=31ebe799-0cc6-4136-86bc-c60091797e73 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 01 09:20:44 old-k8s-version-152344 crio[561]: time="2025-11-01T09:20:44.196079441Z" level=info msg="Removed container 5e215f9c05cd829424f63ba914abecae250267dd476388a79b478303e529116e: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-j7gbs/dashboard-metrics-scraper" id=31ebe799-0cc6-4136-86bc-c60091797e73 name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                              NAMESPACE
	8270b89462e20       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           19 seconds ago      Exited              dashboard-metrics-scraper   2                   f32c0500baeb9       dashboard-metrics-scraper-5f989dc9cf-j7gbs       kubernetes-dashboard
	77f34a751546c       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           25 seconds ago      Running             storage-provisioner         1                   767963ee6ddc6       storage-provisioner                              kube-system
	6cd6c8363476b       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   38 seconds ago      Running             kubernetes-dashboard        0                   2c7ddba118785       kubernetes-dashboard-8694d4445c-qjl6t            kubernetes-dashboard
	4c720398bb4fa       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                           56 seconds ago      Running             coredns                     0                   b0ddd4ebafc34       coredns-5dd5756b68-gcvgr                         kube-system
	8608b0f7e0e07       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           56 seconds ago      Running             busybox                     1                   02eb1388f086c       busybox                                          default
	5b6c9b666836f       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           56 seconds ago      Exited              storage-provisioner         0                   767963ee6ddc6       storage-provisioner                              kube-system
	0e646a0053e64       ea1030da44aa18666a7bf15fddd2a38c3143c3277159cb8bdd95f45c8ce62d7a                                           56 seconds ago      Running             kube-proxy                  0                   965c4dffa077c       kube-proxy-w5hpl                                 kube-system
	a4a5247c122cf       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           56 seconds ago      Running             kindnet-cni                 0                   8e807dda15af0       kindnet-9lbnx                                    kube-system
	b3160f6c872bd       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                           59 seconds ago      Running             etcd                        0                   de447a4a537e9       etcd-old-k8s-version-152344                      kube-system
	872d888710045       bb5e0dde9054c02d6badee88547be7e7bb7b7b818d277c8a61b4b29484bbff95                                           59 seconds ago      Running             kube-apiserver              0                   eca798aec45c8       kube-apiserver-old-k8s-version-152344            kube-system
	a606cb94dcd3a       4be79c38a4bab6e1252a35697500e8a0d9c5c7c771d9fcc1935c9a7f6cdf4c62                                           59 seconds ago      Running             kube-controller-manager     0                   95056a4e7e624       kube-controller-manager-old-k8s-version-152344   kube-system
	e85dd09f3807f       f6f496300a2ae7a6727ccf3080d66d2fd22b6cfc271df5351c976c23a28bb157                                           59 seconds ago      Running             kube-scheduler              0                   d4cf3d87b578c       kube-scheduler-old-k8s-version-152344            kube-system
	
	
	==> coredns [4c720398bb4fa757d679305ceeef216ff753830d7ac6b7c1e25b129c46813c05] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 25cf5af2951e282c4b0e961a02fb5d3e57c974501832fee92eec17b5135b9ec9d9e87d2ac94e6d117a5ed3dd54e8800aa7b4479706eb54497145ccdb80397d1b
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:39631 - 6829 "HINFO IN 5249536083815428656.6825250384916931008. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.492555733s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> describe nodes <==
	Name:               old-k8s-version-152344
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-152344
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=21e20c7776311c6e29254646bf2620ea610dd192
	                    minikube.k8s.io/name=old-k8s-version-152344
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_01T09_19_01_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 01 Nov 2025 09:18:57 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-152344
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 01 Nov 2025 09:20:58 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 01 Nov 2025 09:20:37 +0000   Sat, 01 Nov 2025 09:18:56 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 01 Nov 2025 09:20:37 +0000   Sat, 01 Nov 2025 09:18:56 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 01 Nov 2025 09:20:37 +0000   Sat, 01 Nov 2025 09:18:56 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 01 Nov 2025 09:20:37 +0000   Sat, 01 Nov 2025 09:19:26 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.103.2
	  Hostname:    old-k8s-version-152344
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863348Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863348Ki
	  pods:               110
	System Info:
	  Machine ID:                 98aac72b9abe9f06f1b9b38568f5cc96
	  System UUID:                2997294f-a5eb-4a19-8c2c-94960c03c89f
	  Boot ID:                    87f40279-f2b1-40f3-beea-00aea5942dd1
	  Kernel Version:             6.8.0-1043-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         95s
	  kube-system                 coredns-5dd5756b68-gcvgr                          100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     111s
	  kube-system                 etcd-old-k8s-version-152344                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         2m4s
	  kube-system                 kindnet-9lbnx                                     100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      111s
	  kube-system                 kube-apiserver-old-k8s-version-152344             250m (3%)     0 (0%)      0 (0%)           0 (0%)         2m6s
	  kube-system                 kube-controller-manager-old-k8s-version-152344    200m (2%)     0 (0%)      0 (0%)           0 (0%)         2m4s
	  kube-system                 kube-proxy-w5hpl                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         111s
	  kube-system                 kube-scheduler-old-k8s-version-152344             100m (1%)     0 (0%)      0 (0%)           0 (0%)         2m4s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         110s
	  kubernetes-dashboard        dashboard-metrics-scraper-5f989dc9cf-j7gbs        0 (0%)        0 (0%)      0 (0%)           0 (0%)         45s
	  kubernetes-dashboard        kubernetes-dashboard-8694d4445c-qjl6t             0 (0%)        0 (0%)      0 (0%)           0 (0%)         45s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                   From             Message
	  ----    ------                   ----                  ----             -------
	  Normal  Starting                 109s                  kube-proxy       
	  Normal  Starting                 56s                   kube-proxy       
	  Normal  Starting                 2m10s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m9s (x9 over 2m10s)  kubelet          Node old-k8s-version-152344 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m9s (x8 over 2m10s)  kubelet          Node old-k8s-version-152344 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m9s (x7 over 2m10s)  kubelet          Node old-k8s-version-152344 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    2m4s                  kubelet          Node old-k8s-version-152344 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  2m4s                  kubelet          Node old-k8s-version-152344 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     2m4s                  kubelet          Node old-k8s-version-152344 status is now: NodeHasSufficientPID
	  Normal  Starting                 2m4s                  kubelet          Starting kubelet.
	  Normal  RegisteredNode           112s                  node-controller  Node old-k8s-version-152344 event: Registered Node old-k8s-version-152344 in Controller
	  Normal  NodeReady                98s                   kubelet          Node old-k8s-version-152344 status is now: NodeReady
	  Normal  Starting                 60s                   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  60s (x8 over 60s)     kubelet          Node old-k8s-version-152344 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    60s (x8 over 60s)     kubelet          Node old-k8s-version-152344 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     60s (x8 over 60s)     kubelet          Node old-k8s-version-152344 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           45s                   node-controller  Node old-k8s-version-152344 event: Registered Node old-k8s-version-152344 in Controller
	
	
	==> dmesg <==
	[  +0.000025] ll header: 00000000: 0a 94 8e 4a 1a 1e 72 e7 eb 50 9b c7 08 00
	[  +2.047726] IPv4: martian source 10.105.176.244 from 192.168.49.2, on dev eth0
	[  +0.000008] ll header: 00000000: 0a 94 8e 4a 1a 1e 72 e7 eb 50 9b c7 08 00
	[Nov 1 08:38] IPv4: martian source 10.105.176.244 from 192.168.49.2, on dev eth0
	[  +0.000025] ll header: 00000000: 0a 94 8e 4a 1a 1e 72 e7 eb 50 9b c7 08 00
	[  +2.215674] IPv4: martian source 10.105.176.244 from 192.168.49.2, on dev eth0
	[  +0.000007] ll header: 00000000: 0a 94 8e 4a 1a 1e 72 e7 eb 50 9b c7 08 00
	[  +1.047939] IPv4: martian source 10.244.0.3 from 192.168.49.2, on dev eth0
	[  +0.000023] ll header: 00000000: 0a 94 8e 4a 1a 1e 72 e7 eb 50 9b c7 08 00
	[  +1.023915] IPv4: martian source 10.244.0.3 from 192.168.49.2, on dev eth0
	[  +0.000007] ll header: 00000000: 0a 94 8e 4a 1a 1e 72 e7 eb 50 9b c7 08 00
	[  +1.023851] IPv4: martian source 10.244.0.3 from 192.168.49.2, on dev eth0
	[  +0.000021] ll header: 00000000: 0a 94 8e 4a 1a 1e 72 e7 eb 50 9b c7 08 00
	[  +1.023867] IPv4: martian source 10.244.0.3 from 192.168.49.2, on dev eth0
	[  +0.000026] ll header: 00000000: 0a 94 8e 4a 1a 1e 72 e7 eb 50 9b c7 08 00
	[  +1.023880] IPv4: martian source 10.244.0.3 from 192.168.49.2, on dev eth0
	[  +0.000033] ll header: 00000000: 0a 94 8e 4a 1a 1e 72 e7 eb 50 9b c7 08 00
	[  +1.023884] IPv4: martian source 10.244.0.3 from 192.168.49.2, on dev eth0
	[  +0.000029] ll header: 00000000: 0a 94 8e 4a 1a 1e 72 e7 eb 50 9b c7 08 00
	[  +1.023851] IPv4: martian source 10.244.0.3 from 192.168.49.2, on dev eth0
	[  +0.000028] ll header: 00000000: 0a 94 8e 4a 1a 1e 72 e7 eb 50 9b c7 08 00
	[  +4.031524] IPv4: martian source 10.244.0.3 from 192.168.49.2, on dev eth0
	[  +0.000026] ll header: 00000000: 0a 94 8e 4a 1a 1e 72 e7 eb 50 9b c7 08 00
	[  +8.255123] IPv4: martian source 10.244.0.3 from 192.168.49.2, on dev eth0
	[  +0.000022] ll header: 00000000: 0a 94 8e 4a 1a 1e 72 e7 eb 50 9b c7 08 00
	
	
	==> etcd [b3160f6c872bd74ec717f3bdc6505f4d22b0d84f6f4142f3e0529df91607f430] <==
	{"level":"info","ts":"2025-11-01T09:20:04.630536Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"3336683c081d149d","local-member-id":"f23060b075c4c089","added-peer-id":"f23060b075c4c089","added-peer-peer-urls":["https://192.168.103.2:2380"]}
	{"level":"info","ts":"2025-11-01T09:20:04.630659Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"3336683c081d149d","local-member-id":"f23060b075c4c089","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-01T09:20:04.63069Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-01T09:20:04.630817Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-11-01T09:20:04.631063Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"f23060b075c4c089","initial-advertise-peer-urls":["https://192.168.103.2:2380"],"listen-peer-urls":["https://192.168.103.2:2380"],"advertise-client-urls":["https://192.168.103.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.103.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-11-01T09:20:04.631065Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.103.2:2380"}
	{"level":"info","ts":"2025-11-01T09:20:04.631091Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-11-01T09:20:04.631104Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.103.2:2380"}
	{"level":"info","ts":"2025-11-01T09:20:04.631478Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-11-01T09:20:04.631509Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-11-01T09:20:04.631519Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-11-01T09:20:05.818797Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 is starting a new election at term 2"}
	{"level":"info","ts":"2025-11-01T09:20:05.818912Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 became pre-candidate at term 2"}
	{"level":"info","ts":"2025-11-01T09:20:05.818929Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 received MsgPreVoteResp from f23060b075c4c089 at term 2"}
	{"level":"info","ts":"2025-11-01T09:20:05.818941Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 became candidate at term 3"}
	{"level":"info","ts":"2025-11-01T09:20:05.818947Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 received MsgVoteResp from f23060b075c4c089 at term 3"}
	{"level":"info","ts":"2025-11-01T09:20:05.818955Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 became leader at term 3"}
	{"level":"info","ts":"2025-11-01T09:20:05.818962Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: f23060b075c4c089 elected leader f23060b075c4c089 at term 3"}
	{"level":"info","ts":"2025-11-01T09:20:05.820699Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"f23060b075c4c089","local-member-attributes":"{Name:old-k8s-version-152344 ClientURLs:[https://192.168.103.2:2379]}","request-path":"/0/members/f23060b075c4c089/attributes","cluster-id":"3336683c081d149d","publish-timeout":"7s"}
	{"level":"info","ts":"2025-11-01T09:20:05.820691Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-11-01T09:20:05.820745Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-11-01T09:20:05.821047Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-11-01T09:20:05.821072Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-11-01T09:20:05.82245Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-11-01T09:20:05.823036Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.103.2:2379"}
	
	
	==> kernel <==
	 09:21:04 up  1:03,  0 user,  load average: 2.81, 2.49, 1.53
	Linux old-k8s-version-152344 6.8.0-1043-gcp #46~22.04.1-Ubuntu SMP Wed Oct 22 19:00:03 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [a4a5247c122cf0b4dc4fa5a348dbacca1caae7a3afc1116e75392ae7a3ec6dd2] <==
	I1101 09:20:07.675406       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1101 09:20:07.675761       1 main.go:139] hostIP = 192.168.103.2
	podIP = 192.168.103.2
	I1101 09:20:07.675950       1 main.go:148] setting mtu 1500 for CNI 
	I1101 09:20:07.675974       1 main.go:178] kindnetd IP family: "ipv4"
	I1101 09:20:07.676008       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-01T09:20:07Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1101 09:20:07.953472       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1101 09:20:07.953497       1 controller.go:381] "Waiting for informer caches to sync"
	I1101 09:20:07.953508       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1101 09:20:08.046165       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1101 09:20:08.453848       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1101 09:20:08.453992       1 metrics.go:72] Registering metrics
	I1101 09:20:08.454113       1 controller.go:711] "Syncing nftables rules"
	I1101 09:20:17.952939       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1101 09:20:17.953001       1 main.go:301] handling current node
	I1101 09:20:27.953089       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1101 09:20:27.953124       1 main.go:301] handling current node
	I1101 09:20:37.953030       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1101 09:20:37.953067       1 main.go:301] handling current node
	I1101 09:20:47.953082       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1101 09:20:47.953135       1 main.go:301] handling current node
	I1101 09:20:57.958982       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1101 09:20:57.959018       1 main.go:301] handling current node
	
	
	==> kube-apiserver [872d88871004564b9fec7c298dff556377a0ff948066c78e7c1b182d8b6271f4] <==
	I1101 09:20:06.846715       1 shared_informer.go:318] Caches are synced for node_authorizer
	I1101 09:20:06.848214       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1101 09:20:06.892091       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1101 09:20:06.892021       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1101 09:20:06.892374       1 shared_informer.go:318] Caches are synced for configmaps
	I1101 09:20:06.892432       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I1101 09:20:06.892451       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I1101 09:20:06.892454       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I1101 09:20:06.894411       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1101 09:20:06.894447       1 aggregator.go:166] initial CRD sync complete...
	I1101 09:20:06.894455       1 autoregister_controller.go:141] Starting autoregister controller
	I1101 09:20:06.894461       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1101 09:20:06.894468       1 cache.go:39] Caches are synced for autoregister controller
	E1101 09:20:06.897719       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1101 09:20:07.795598       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1101 09:20:08.011808       1 controller.go:624] quota admission added evaluator for: namespaces
	I1101 09:20:08.067192       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1101 09:20:08.108133       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1101 09:20:08.118057       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1101 09:20:08.127952       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1101 09:20:08.180017       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.97.192.95"}
	I1101 09:20:08.196762       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.111.150.241"}
	I1101 09:20:19.045750       1 controller.go:624] quota admission added evaluator for: endpoints
	I1101 09:20:19.175886       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I1101 09:20:19.271108       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [a606cb94dcd3aedf1e555a4a15b3aa6ab1c1fbfea9bfe61bea7fb5777b59cb5a] <==
	I1101 09:20:19.201995       1 shared_informer.go:318] Caches are synced for endpoint_slice
	I1101 09:20:19.203134       1 shared_informer.go:318] Caches are synced for daemon sets
	I1101 09:20:19.206855       1 shared_informer.go:318] Caches are synced for TTL
	I1101 09:20:19.210676       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="28.085097ms"
	I1101 09:20:19.218405       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="35.229798ms"
	I1101 09:20:19.221759       1 shared_informer.go:318] Caches are synced for resource quota
	I1101 09:20:19.223786       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="12.992128ms"
	I1101 09:20:19.231169       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="7.323894ms"
	I1101 09:20:19.231167       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="12.638865ms"
	I1101 09:20:19.231314       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="92.933µs"
	I1101 09:20:19.238919       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="90.105µs"
	I1101 09:20:19.252214       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="77.553µs"
	I1101 09:20:19.296081       1 shared_informer.go:318] Caches are synced for ClusterRoleAggregator
	I1101 09:20:19.636541       1 shared_informer.go:318] Caches are synced for garbage collector
	I1101 09:20:19.654648       1 shared_informer.go:318] Caches are synced for garbage collector
	I1101 09:20:19.654683       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1101 09:20:22.133252       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="64.079µs"
	I1101 09:20:23.141628       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="56.365µs"
	I1101 09:20:24.142647       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="94.398µs"
	I1101 09:20:26.156281       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="7.209306ms"
	I1101 09:20:26.156411       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="67.223µs"
	I1101 09:20:44.196967       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="88.071µs"
	I1101 09:20:47.234625       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="8.154737ms"
	I1101 09:20:47.234763       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="88.034µs"
	I1101 09:20:49.526702       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="84.53µs"
	
	
	==> kube-proxy [0e646a0053e644e0c8aaf75d4f21ac7cb97c0c1c5004411b54aea626f1cb6948] <==
	I1101 09:20:07.567348       1 server_others.go:69] "Using iptables proxy"
	I1101 09:20:07.583432       1 node.go:141] Successfully retrieved node IP: 192.168.103.2
	I1101 09:20:07.609424       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1101 09:20:07.612699       1 server_others.go:152] "Using iptables Proxier"
	I1101 09:20:07.612857       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1101 09:20:07.612924       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1101 09:20:07.612974       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1101 09:20:07.613293       1 server.go:846] "Version info" version="v1.28.0"
	I1101 09:20:07.613429       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1101 09:20:07.614514       1 config.go:188] "Starting service config controller"
	I1101 09:20:07.615389       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1101 09:20:07.615135       1 config.go:315] "Starting node config controller"
	I1101 09:20:07.615516       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1101 09:20:07.615322       1 config.go:97] "Starting endpoint slice config controller"
	I1101 09:20:07.615528       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1101 09:20:07.716271       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1101 09:20:07.716331       1 shared_informer.go:318] Caches are synced for service config
	I1101 09:20:07.716652       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [e85dd09f3807f74307934a4f301776d16ac8f60eb9cc757e6f6bf2553901baef] <==
	I1101 09:20:05.257332       1 serving.go:348] Generated self-signed cert in-memory
	W1101 09:20:06.811478       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1101 09:20:06.811512       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1101 09:20:06.811525       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1101 09:20:06.811534       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1101 09:20:06.841889       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.0"
	I1101 09:20:06.841932       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1101 09:20:06.843791       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1101 09:20:06.843879       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1101 09:20:06.845852       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I1101 09:20:06.845972       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1101 09:20:06.944492       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Nov 01 09:20:19 old-k8s-version-152344 kubelet[718]: I1101 09:20:19.333382     718 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/827e6d08-5ed0-451b-84d3-91922812871c-tmp-volume\") pod \"kubernetes-dashboard-8694d4445c-qjl6t\" (UID: \"827e6d08-5ed0-451b-84d3-91922812871c\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-qjl6t"
	Nov 01 09:20:19 old-k8s-version-152344 kubelet[718]: I1101 09:20:19.333445     718 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/44708f58-737c-4f77-af56-0cee23c3d247-tmp-volume\") pod \"dashboard-metrics-scraper-5f989dc9cf-j7gbs\" (UID: \"44708f58-737c-4f77-af56-0cee23c3d247\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-j7gbs"
	Nov 01 09:20:19 old-k8s-version-152344 kubelet[718]: I1101 09:20:19.333482     718 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5sv74\" (UniqueName: \"kubernetes.io/projected/827e6d08-5ed0-451b-84d3-91922812871c-kube-api-access-5sv74\") pod \"kubernetes-dashboard-8694d4445c-qjl6t\" (UID: \"827e6d08-5ed0-451b-84d3-91922812871c\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-qjl6t"
	Nov 01 09:20:19 old-k8s-version-152344 kubelet[718]: I1101 09:20:19.333515     718 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j5pqh\" (UniqueName: \"kubernetes.io/projected/44708f58-737c-4f77-af56-0cee23c3d247-kube-api-access-j5pqh\") pod \"dashboard-metrics-scraper-5f989dc9cf-j7gbs\" (UID: \"44708f58-737c-4f77-af56-0cee23c3d247\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-j7gbs"
	Nov 01 09:20:22 old-k8s-version-152344 kubelet[718]: I1101 09:20:22.120527     718 scope.go:117] "RemoveContainer" containerID="f84d53fdd254a7b97e86cde778ebec6d0fb72e4efc277a406e88166a9e1197b8"
	Nov 01 09:20:23 old-k8s-version-152344 kubelet[718]: I1101 09:20:23.125599     718 scope.go:117] "RemoveContainer" containerID="f84d53fdd254a7b97e86cde778ebec6d0fb72e4efc277a406e88166a9e1197b8"
	Nov 01 09:20:23 old-k8s-version-152344 kubelet[718]: I1101 09:20:23.126094     718 scope.go:117] "RemoveContainer" containerID="5e215f9c05cd829424f63ba914abecae250267dd476388a79b478303e529116e"
	Nov 01 09:20:23 old-k8s-version-152344 kubelet[718]: E1101 09:20:23.126457     718 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-j7gbs_kubernetes-dashboard(44708f58-737c-4f77-af56-0cee23c3d247)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-j7gbs" podUID="44708f58-737c-4f77-af56-0cee23c3d247"
	Nov 01 09:20:24 old-k8s-version-152344 kubelet[718]: I1101 09:20:24.131056     718 scope.go:117] "RemoveContainer" containerID="5e215f9c05cd829424f63ba914abecae250267dd476388a79b478303e529116e"
	Nov 01 09:20:24 old-k8s-version-152344 kubelet[718]: E1101 09:20:24.131421     718 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-j7gbs_kubernetes-dashboard(44708f58-737c-4f77-af56-0cee23c3d247)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-j7gbs" podUID="44708f58-737c-4f77-af56-0cee23c3d247"
	Nov 01 09:20:29 old-k8s-version-152344 kubelet[718]: I1101 09:20:29.515580     718 scope.go:117] "RemoveContainer" containerID="5e215f9c05cd829424f63ba914abecae250267dd476388a79b478303e529116e"
	Nov 01 09:20:29 old-k8s-version-152344 kubelet[718]: E1101 09:20:29.516036     718 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-j7gbs_kubernetes-dashboard(44708f58-737c-4f77-af56-0cee23c3d247)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-j7gbs" podUID="44708f58-737c-4f77-af56-0cee23c3d247"
	Nov 01 09:20:38 old-k8s-version-152344 kubelet[718]: I1101 09:20:38.163030     718 scope.go:117] "RemoveContainer" containerID="5b6c9b666836f422d8078161076c25e4166a0975599c4a826a525cd241adf6aa"
	Nov 01 09:20:38 old-k8s-version-152344 kubelet[718]: I1101 09:20:38.175457     718 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-qjl6t" podStartSLOduration=13.251694977 podCreationTimestamp="2025-11-01 09:20:19 +0000 UTC" firstStartedPulling="2025-11-01 09:20:19.546658819 +0000 UTC m=+15.610599029" lastFinishedPulling="2025-11-01 09:20:25.470311403 +0000 UTC m=+21.534251611" observedRunningTime="2025-11-01 09:20:26.149835904 +0000 UTC m=+22.213776130" watchObservedRunningTime="2025-11-01 09:20:38.175347559 +0000 UTC m=+34.239287786"
	Nov 01 09:20:44 old-k8s-version-152344 kubelet[718]: I1101 09:20:44.043991     718 scope.go:117] "RemoveContainer" containerID="5e215f9c05cd829424f63ba914abecae250267dd476388a79b478303e529116e"
	Nov 01 09:20:44 old-k8s-version-152344 kubelet[718]: I1101 09:20:44.183743     718 scope.go:117] "RemoveContainer" containerID="5e215f9c05cd829424f63ba914abecae250267dd476388a79b478303e529116e"
	Nov 01 09:20:44 old-k8s-version-152344 kubelet[718]: I1101 09:20:44.184044     718 scope.go:117] "RemoveContainer" containerID="8270b89462e20b129ac3e5fdb15f4995bd39adc8b8af28d2aa2bfb55eb070be9"
	Nov 01 09:20:44 old-k8s-version-152344 kubelet[718]: E1101 09:20:44.184425     718 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-j7gbs_kubernetes-dashboard(44708f58-737c-4f77-af56-0cee23c3d247)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-j7gbs" podUID="44708f58-737c-4f77-af56-0cee23c3d247"
	Nov 01 09:20:49 old-k8s-version-152344 kubelet[718]: I1101 09:20:49.515730     718 scope.go:117] "RemoveContainer" containerID="8270b89462e20b129ac3e5fdb15f4995bd39adc8b8af28d2aa2bfb55eb070be9"
	Nov 01 09:20:49 old-k8s-version-152344 kubelet[718]: E1101 09:20:49.516070     718 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-j7gbs_kubernetes-dashboard(44708f58-737c-4f77-af56-0cee23c3d247)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-j7gbs" podUID="44708f58-737c-4f77-af56-0cee23c3d247"
	Nov 01 09:21:00 old-k8s-version-152344 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 01 09:21:00 old-k8s-version-152344 kubelet[718]: I1101 09:21:00.855471     718 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	Nov 01 09:21:00 old-k8s-version-152344 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 01 09:21:00 old-k8s-version-152344 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Nov 01 09:21:00 old-k8s-version-152344 systemd[1]: kubelet.service: Consumed 1.662s CPU time.
	
	
	==> kubernetes-dashboard [6cd6c8363476bd0a34deef513a204c6da2611045224adb8efc65c2778e33c742] <==
	2025/11/01 09:20:25 Starting overwatch
	2025/11/01 09:20:25 Using namespace: kubernetes-dashboard
	2025/11/01 09:20:25 Using in-cluster config to connect to apiserver
	2025/11/01 09:20:25 Using secret token for csrf signing
	2025/11/01 09:20:25 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/11/01 09:20:25 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/11/01 09:20:25 Successful initial request to the apiserver, version: v1.28.0
	2025/11/01 09:20:25 Generating JWE encryption key
	2025/11/01 09:20:25 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/11/01 09:20:25 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/11/01 09:20:26 Initializing JWE encryption key from synchronized object
	2025/11/01 09:20:26 Creating in-cluster Sidecar client
	2025/11/01 09:20:26 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/01 09:20:26 Serving insecurely on HTTP port: 9090
	2025/11/01 09:20:56 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [5b6c9b666836f422d8078161076c25e4166a0975599c4a826a525cd241adf6aa] <==
	I1101 09:20:07.487657       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1101 09:20:37.495639       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [77f34a751546cae898ab472655f56c5c32eccc085038636d42ef139d6abf16c6] <==
	I1101 09:20:38.216005       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1101 09:20:38.224919       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1101 09:20:38.224984       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1101 09:20:55.622291       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1101 09:20:55.622477       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-152344_03ce407e-ed10-49cd-badc-9e9a44d78715!
	I1101 09:20:55.622440       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"d54d713e-3c09-4409-a20a-85838e16fc43", APIVersion:"v1", ResourceVersion:"652", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-152344_03ce407e-ed10-49cd-badc-9e9a44d78715 became leader
	I1101 09:20:55.723429       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-152344_03ce407e-ed10-49cd-badc-9e9a44d78715!
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-152344 -n old-k8s-version-152344
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-152344 -n old-k8s-version-152344: exit status 2 (361.090848ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context old-k8s-version-152344 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect old-k8s-version-152344
helpers_test.go:243: (dbg) docker inspect old-k8s-version-152344:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "89c3ec5c14cb5bc7fcba09480cd2896570f287e2f8a794c80fecfe7d058e83fe",
	        "Created": "2025-11-01T09:18:45.394454049Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 245536,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-01T09:19:57.732800789Z",
	            "FinishedAt": "2025-11-01T09:19:56.784093761Z"
	        },
	        "Image": "sha256:a1caeebaf98ed0136731e905a1e086f77985a42c2ebb5a7e0b3d0bd7fcbe10cc",
	        "ResolvConfPath": "/var/lib/docker/containers/89c3ec5c14cb5bc7fcba09480cd2896570f287e2f8a794c80fecfe7d058e83fe/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/89c3ec5c14cb5bc7fcba09480cd2896570f287e2f8a794c80fecfe7d058e83fe/hostname",
	        "HostsPath": "/var/lib/docker/containers/89c3ec5c14cb5bc7fcba09480cd2896570f287e2f8a794c80fecfe7d058e83fe/hosts",
	        "LogPath": "/var/lib/docker/containers/89c3ec5c14cb5bc7fcba09480cd2896570f287e2f8a794c80fecfe7d058e83fe/89c3ec5c14cb5bc7fcba09480cd2896570f287e2f8a794c80fecfe7d058e83fe-json.log",
	        "Name": "/old-k8s-version-152344",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-152344:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "old-k8s-version-152344",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "89c3ec5c14cb5bc7fcba09480cd2896570f287e2f8a794c80fecfe7d058e83fe",
	                "LowerDir": "/var/lib/docker/overlay2/2167d10a6be83eefc462824ae671de179964763d19a49dc3e2df049d863ec511-init/diff:/var/lib/docker/overlay2/95b4043403bc6c2b0722fad0bb31817195e1e004282cc78b772e7c8c9f9def2d/diff",
	                "MergedDir": "/var/lib/docker/overlay2/2167d10a6be83eefc462824ae671de179964763d19a49dc3e2df049d863ec511/merged",
	                "UpperDir": "/var/lib/docker/overlay2/2167d10a6be83eefc462824ae671de179964763d19a49dc3e2df049d863ec511/diff",
	                "WorkDir": "/var/lib/docker/overlay2/2167d10a6be83eefc462824ae671de179964763d19a49dc3e2df049d863ec511/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-152344",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-152344/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-152344",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-152344",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-152344",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "58cd051a9311c7483583bc65aad5229ec67ddeaaf8519b2168d3e6d6233f7dac",
	            "SandboxKey": "/var/run/docker/netns/58cd051a9311",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33068"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33069"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33072"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33070"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33071"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-152344": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "12:7e:7e:45:39:d0",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "afdc78f81dc3d841ed82d50aa51ef8a188690396e16fb0187f6b53f70953a37a",
	                    "EndpointID": "ea1fc51d54fe5d9076c50c01985fbcfe49b42a1ca168a7591cc62ce1af1b94b8",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-152344",
	                        "89c3ec5c14cb"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
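For reference, the 22/tcp host-port binding visible in the inspect output above (127.0.0.1:33068 for old-k8s-version-152344) is what the test helpers rely on to reach the node over SSH. It can be re-read with the same Go template that minikube itself runs later in this log; a minimal sketch, assuming the old-k8s-version-152344 container still exists on the host:

	# query only the published host port for 22/tcp instead of the full inspect JSON
	docker container inspect \
	  -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' \
	  old-k8s-version-152344
	# for the state captured above this prints 33068 (bound on 127.0.0.1)
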
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-152344 -n old-k8s-version-152344
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-152344 -n old-k8s-version-152344: exit status 2 (369.762072ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-152344 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-152344 logs -n 25: (1.331013813s)
helpers_test.go:260: TestStartStop/group/old-k8s-version/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬────────────────────
─┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼────────────────────
─┤
	│ ssh     │ -p cilium-204434 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                 │ cilium-204434          │ jenkins │ v1.37.0 │ 01 Nov 25 09:18 UTC │                     │
	│ ssh     │ -p cilium-204434 sudo crio config                                                                                                                                                                                                             │ cilium-204434          │ jenkins │ v1.37.0 │ 01 Nov 25 09:18 UTC │                     │
	│ delete  │ -p cilium-204434                                                                                                                                                                                                                              │ cilium-204434          │ jenkins │ v1.37.0 │ 01 Nov 25 09:18 UTC │ 01 Nov 25 09:18 UTC │
	│ start   │ -p old-k8s-version-152344 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-152344 │ jenkins │ v1.37.0 │ 01 Nov 25 09:18 UTC │ 01 Nov 25 09:19 UTC │
	│ delete  │ -p running-upgrade-274843                                                                                                                                                                                                                     │ running-upgrade-274843 │ jenkins │ v1.37.0 │ 01 Nov 25 09:18 UTC │ 01 Nov 25 09:18 UTC │
	│ start   │ -p no-preload-397460 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-397460      │ jenkins │ v1.37.0 │ 01 Nov 25 09:18 UTC │ 01 Nov 25 09:19 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-152344 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-152344 │ jenkins │ v1.37.0 │ 01 Nov 25 09:19 UTC │                     │
	│ stop    │ -p old-k8s-version-152344 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-152344 │ jenkins │ v1.37.0 │ 01 Nov 25 09:19 UTC │ 01 Nov 25 09:19 UTC │
	│ start   │ -p cert-expiration-303094 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-303094 │ jenkins │ v1.37.0 │ 01 Nov 25 09:19 UTC │ 01 Nov 25 09:19 UTC │
	│ addons  │ enable metrics-server -p no-preload-397460 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-397460      │ jenkins │ v1.37.0 │ 01 Nov 25 09:19 UTC │                     │
	│ delete  │ -p cert-expiration-303094                                                                                                                                                                                                                     │ cert-expiration-303094 │ jenkins │ v1.37.0 │ 01 Nov 25 09:19 UTC │ 01 Nov 25 09:19 UTC │
	│ stop    │ -p no-preload-397460 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-397460      │ jenkins │ v1.37.0 │ 01 Nov 25 09:19 UTC │ 01 Nov 25 09:20 UTC │
	│ start   │ -p embed-certs-236314 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-236314     │ jenkins │ v1.37.0 │ 01 Nov 25 09:19 UTC │ 01 Nov 25 09:20 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-152344 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-152344 │ jenkins │ v1.37.0 │ 01 Nov 25 09:19 UTC │ 01 Nov 25 09:19 UTC │
	│ start   │ -p old-k8s-version-152344 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-152344 │ jenkins │ v1.37.0 │ 01 Nov 25 09:19 UTC │ 01 Nov 25 09:20 UTC │
	│ addons  │ enable dashboard -p no-preload-397460 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-397460      │ jenkins │ v1.37.0 │ 01 Nov 25 09:20 UTC │ 01 Nov 25 09:20 UTC │
	│ start   │ -p no-preload-397460 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-397460      │ jenkins │ v1.37.0 │ 01 Nov 25 09:20 UTC │ 01 Nov 25 09:20 UTC │
	│ addons  │ enable metrics-server -p embed-certs-236314 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-236314     │ jenkins │ v1.37.0 │ 01 Nov 25 09:20 UTC │                     │
	│ stop    │ -p embed-certs-236314 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-236314     │ jenkins │ v1.37.0 │ 01 Nov 25 09:20 UTC │ 01 Nov 25 09:20 UTC │
	│ addons  │ enable dashboard -p embed-certs-236314 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-236314     │ jenkins │ v1.37.0 │ 01 Nov 25 09:20 UTC │ 01 Nov 25 09:20 UTC │
	│ start   │ -p embed-certs-236314 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-236314     │ jenkins │ v1.37.0 │ 01 Nov 25 09:20 UTC │                     │
	│ image   │ old-k8s-version-152344 image list --format=json                                                                                                                                                                                               │ old-k8s-version-152344 │ jenkins │ v1.37.0 │ 01 Nov 25 09:21 UTC │ 01 Nov 25 09:21 UTC │
	│ pause   │ -p old-k8s-version-152344 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-152344 │ jenkins │ v1.37.0 │ 01 Nov 25 09:21 UTC │                     │
	│ image   │ no-preload-397460 image list --format=json                                                                                                                                                                                                    │ no-preload-397460      │ jenkins │ v1.37.0 │ 01 Nov 25 09:21 UTC │ 01 Nov 25 09:21 UTC │
	│ pause   │ -p no-preload-397460 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-397460      │ jenkins │ v1.37.0 │ 01 Nov 25 09:21 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴────────────────────
─┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/01 09:20:59
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1101 09:20:59.428556  256247 out.go:360] Setting OutFile to fd 1 ...
	I1101 09:20:59.428818  256247 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:20:59.428829  256247 out.go:374] Setting ErrFile to fd 2...
	I1101 09:20:59.428834  256247 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:20:59.429086  256247 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21835-5913/.minikube/bin
	I1101 09:20:59.429532  256247 out.go:368] Setting JSON to false
	I1101 09:20:59.430786  256247 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":3807,"bootTime":1761985052,"procs":302,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1101 09:20:59.430894  256247 start.go:143] virtualization: kvm guest
	I1101 09:20:59.433043  256247 out.go:179] * [embed-certs-236314] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1101 09:20:59.434385  256247 out.go:179]   - MINIKUBE_LOCATION=21835
	I1101 09:20:59.434401  256247 notify.go:221] Checking for updates...
	I1101 09:20:59.436934  256247 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1101 09:20:59.438200  256247 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21835-5913/kubeconfig
	I1101 09:20:59.439375  256247 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21835-5913/.minikube
	I1101 09:20:59.440741  256247 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1101 09:20:59.441964  256247 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1101 09:20:59.443571  256247 config.go:182] Loaded profile config "embed-certs-236314": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:20:59.444071  256247 driver.go:422] Setting default libvirt URI to qemu:///system
	I1101 09:20:59.469169  256247 docker.go:124] docker version: linux-28.5.1:Docker Engine - Community
	I1101 09:20:59.469258  256247 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 09:20:59.527156  256247 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:false NGoroutines:75 SystemTime:2025-11-01 09:20:59.517141551 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1101 09:20:59.527271  256247 docker.go:319] overlay module found
	I1101 09:20:59.529297  256247 out.go:179] * Using the docker driver based on existing profile
	I1101 09:20:59.530647  256247 start.go:309] selected driver: docker
	I1101 09:20:59.530667  256247 start.go:930] validating driver "docker" against &{Name:embed-certs-236314 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-236314 Namespace:default APIServerHAVIP: APIServerN
ame:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:
9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 09:20:59.530767  256247 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1101 09:20:59.531375  256247 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 09:20:59.591821  256247 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:false NGoroutines:75 SystemTime:2025-11-01 09:20:59.581513893 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1101 09:20:59.592141  256247 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1101 09:20:59.592171  256247 cni.go:84] Creating CNI manager for ""
	I1101 09:20:59.592221  256247 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1101 09:20:59.592272  256247 start.go:353] cluster config:
	{Name:embed-certs-236314 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-236314 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Contain
erRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false
DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 09:20:59.594010  256247 out.go:179] * Starting "embed-certs-236314" primary control-plane node in "embed-certs-236314" cluster
	I1101 09:20:59.595373  256247 cache.go:124] Beginning downloading kic base image for docker with crio
	I1101 09:20:59.596730  256247 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1101 09:20:59.598034  256247 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1101 09:20:59.598070  256247 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1101 09:20:59.598094  256247 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21835-5913/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1101 09:20:59.598109  256247 cache.go:59] Caching tarball of preloaded images
	I1101 09:20:59.598199  256247 preload.go:233] Found /home/jenkins/minikube-integration/21835-5913/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1101 09:20:59.598213  256247 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1101 09:20:59.598344  256247 profile.go:143] Saving config to /home/jenkins/minikube-integration/21835-5913/.minikube/profiles/embed-certs-236314/config.json ...
	I1101 09:20:59.620135  256247 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1101 09:20:59.620159  256247 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1101 09:20:59.620175  256247 cache.go:233] Successfully downloaded all kic artifacts
	I1101 09:20:59.620198  256247 start.go:360] acquireMachinesLock for embed-certs-236314: {Name:mk8eda201f80ebfb2f2bb01891a2b839f76263b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1101 09:20:59.620253  256247 start.go:364] duration metric: took 37.33µs to acquireMachinesLock for "embed-certs-236314"
	I1101 09:20:59.620271  256247 start.go:96] Skipping create...Using existing machine configuration
	I1101 09:20:59.620276  256247 fix.go:54] fixHost starting: 
	I1101 09:20:59.620535  256247 cli_runner.go:164] Run: docker container inspect embed-certs-236314 --format={{.State.Status}}
	I1101 09:20:59.639500  256247 fix.go:112] recreateIfNeeded on embed-certs-236314: state=Stopped err=<nil>
	W1101 09:20:59.639552  256247 fix.go:138] unexpected machine state, will restart: <nil>
	I1101 09:20:58.468250  216020 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": (10.063103618s)
	W1101 09:20:58.468297  216020 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	Unable to connect to the server: net/http: TLS handshake timeout
	 output: 
	** stderr ** 
	Unable to connect to the server: net/http: TLS handshake timeout
	
	** /stderr **
	I1101 09:20:58.468307  216020 logs.go:123] Gathering logs for kube-apiserver [b88c1a6c22eb7f0b872bf19c02addfdad95c9f578a8590d19c57a767cb19559f] ...
	I1101 09:20:58.468322  216020 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b88c1a6c22eb7f0b872bf19c02addfdad95c9f578a8590d19c57a767cb19559f"
	I1101 09:20:59.641465  256247 out.go:252] * Restarting existing docker container for "embed-certs-236314" ...
	I1101 09:20:59.641539  256247 cli_runner.go:164] Run: docker start embed-certs-236314
	I1101 09:20:59.902231  256247 cli_runner.go:164] Run: docker container inspect embed-certs-236314 --format={{.State.Status}}
	I1101 09:20:59.922418  256247 kic.go:430] container "embed-certs-236314" state is running.
	I1101 09:20:59.922774  256247 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-236314
	I1101 09:20:59.942287  256247 profile.go:143] Saving config to /home/jenkins/minikube-integration/21835-5913/.minikube/profiles/embed-certs-236314/config.json ...
	I1101 09:20:59.942510  256247 machine.go:94] provisionDockerMachine start ...
	I1101 09:20:59.942564  256247 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-236314
	I1101 09:20:59.962745  256247 main.go:143] libmachine: Using SSH client type: native
	I1101 09:20:59.963041  256247 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33078 <nil> <nil>}
	I1101 09:20:59.963056  256247 main.go:143] libmachine: About to run SSH command:
	hostname
	I1101 09:20:59.963759  256247 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:36770->127.0.0.1:33078: read: connection reset by peer
	I1101 09:21:03.117563  256247 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-236314
	
	I1101 09:21:03.117594  256247 ubuntu.go:182] provisioning hostname "embed-certs-236314"
	I1101 09:21:03.117656  256247 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-236314
	I1101 09:21:03.142432  256247 main.go:143] libmachine: Using SSH client type: native
	I1101 09:21:03.142735  256247 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33078 <nil> <nil>}
	I1101 09:21:03.142750  256247 main.go:143] libmachine: About to run SSH command:
	sudo hostname embed-certs-236314 && echo "embed-certs-236314" | sudo tee /etc/hostname
	I1101 09:21:03.305920  256247 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-236314
	
	I1101 09:21:03.306024  256247 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-236314
	I1101 09:21:03.331383  256247 main.go:143] libmachine: Using SSH client type: native
	I1101 09:21:03.331747  256247 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33078 <nil> <nil>}
	I1101 09:21:03.331779  256247 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-236314' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-236314/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-236314' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1101 09:21:03.490967  256247 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1101 09:21:03.491130  256247 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21835-5913/.minikube CaCertPath:/home/jenkins/minikube-integration/21835-5913/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21835-5913/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21835-5913/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21835-5913/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21835-5913/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21835-5913/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21835-5913/.minikube}
	I1101 09:21:03.491284  256247 ubuntu.go:190] setting up certificates
	I1101 09:21:03.491299  256247 provision.go:84] configureAuth start
	I1101 09:21:03.491599  256247 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-236314
	I1101 09:21:03.518311  256247 provision.go:143] copyHostCerts
	I1101 09:21:03.518365  256247 exec_runner.go:144] found /home/jenkins/minikube-integration/21835-5913/.minikube/ca.pem, removing ...
	I1101 09:21:03.518385  256247 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21835-5913/.minikube/ca.pem
	I1101 09:21:03.518461  256247 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21835-5913/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21835-5913/.minikube/ca.pem (1078 bytes)
	I1101 09:21:03.518591  256247 exec_runner.go:144] found /home/jenkins/minikube-integration/21835-5913/.minikube/cert.pem, removing ...
	I1101 09:21:03.518605  256247 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21835-5913/.minikube/cert.pem
	I1101 09:21:03.518646  256247 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21835-5913/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21835-5913/.minikube/cert.pem (1123 bytes)
	I1101 09:21:03.518803  256247 exec_runner.go:144] found /home/jenkins/minikube-integration/21835-5913/.minikube/key.pem, removing ...
	I1101 09:21:03.518819  256247 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21835-5913/.minikube/key.pem
	I1101 09:21:03.518904  256247 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21835-5913/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21835-5913/.minikube/key.pem (1675 bytes)
	I1101 09:21:03.519019  256247 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21835-5913/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21835-5913/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21835-5913/.minikube/certs/ca-key.pem org=jenkins.embed-certs-236314 san=[127.0.0.1 192.168.76.2 embed-certs-236314 localhost minikube]
	I1101 09:21:03.680093  256247 provision.go:177] copyRemoteCerts
	I1101 09:21:03.680157  256247 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1101 09:21:03.680200  256247 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-236314
	I1101 09:21:03.704893  256247 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33078 SSHKeyPath:/home/jenkins/minikube-integration/21835-5913/.minikube/machines/embed-certs-236314/id_rsa Username:docker}
	I1101 09:21:03.816511  256247 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-5913/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1101 09:21:03.837369  256247 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-5913/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1101 09:21:03.857322  256247 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-5913/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1101 09:21:03.878326  256247 provision.go:87] duration metric: took 387.013499ms to configureAuth
	I1101 09:21:03.878358  256247 ubuntu.go:206] setting minikube options for container-runtime
	I1101 09:21:03.878577  256247 config.go:182] Loaded profile config "embed-certs-236314": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:21:03.878713  256247 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-236314
	I1101 09:21:03.901022  256247 main.go:143] libmachine: Using SSH client type: native
	I1101 09:21:03.901326  256247 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33078 <nil> <nil>}
	I1101 09:21:03.901362  256247 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1101 09:21:04.244254  256247 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1101 09:21:04.244281  256247 machine.go:97] duration metric: took 4.30175559s to provisionDockerMachine
	I1101 09:21:04.244294  256247 start.go:293] postStartSetup for "embed-certs-236314" (driver="docker")
	I1101 09:21:04.244307  256247 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1101 09:21:04.244385  256247 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1101 09:21:04.244488  256247 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-236314
	I1101 09:21:04.266806  256247 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33078 SSHKeyPath:/home/jenkins/minikube-integration/21835-5913/.minikube/machines/embed-certs-236314/id_rsa Username:docker}
	I1101 09:21:04.373789  256247 ssh_runner.go:195] Run: cat /etc/os-release
	I1101 09:21:04.379229  256247 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1101 09:21:04.379281  256247 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1101 09:21:04.379293  256247 filesync.go:126] Scanning /home/jenkins/minikube-integration/21835-5913/.minikube/addons for local assets ...
	I1101 09:21:04.379364  256247 filesync.go:126] Scanning /home/jenkins/minikube-integration/21835-5913/.minikube/files for local assets ...
	I1101 09:21:04.379500  256247 filesync.go:149] local asset: /home/jenkins/minikube-integration/21835-5913/.minikube/files/etc/ssl/certs/94142.pem -> 94142.pem in /etc/ssl/certs
	I1101 09:21:04.379692  256247 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1101 09:21:04.389334  256247 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-5913/.minikube/files/etc/ssl/certs/94142.pem --> /etc/ssl/certs/94142.pem (1708 bytes)
	I1101 09:21:04.409193  256247 start.go:296] duration metric: took 164.885201ms for postStartSetup
	I1101 09:21:04.409277  256247 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1101 09:21:04.409326  256247 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-236314
	I1101 09:21:01.003189  216020 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1101 09:21:03.032694  216020 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": read tcp 192.168.85.1:46726->192.168.85.2:8443: read: connection reset by peer
	I1101 09:21:03.032778  216020 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1101 09:21:03.032833  216020 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1101 09:21:03.071972  216020 cri.go:89] found id: "b88c1a6c22eb7f0b872bf19c02addfdad95c9f578a8590d19c57a767cb19559f"
	I1101 09:21:03.072009  216020 cri.go:89] found id: "f1d1ad774988f50a8cc9b852d65649e1e6961c9ea7280d9c95161580ca21bce2"
	I1101 09:21:03.072015  216020 cri.go:89] found id: ""
	I1101 09:21:03.072023  216020 logs.go:282] 2 containers: [b88c1a6c22eb7f0b872bf19c02addfdad95c9f578a8590d19c57a767cb19559f f1d1ad774988f50a8cc9b852d65649e1e6961c9ea7280d9c95161580ca21bce2]
	I1101 09:21:03.072078  216020 ssh_runner.go:195] Run: which crictl
	I1101 09:21:03.077222  216020 ssh_runner.go:195] Run: which crictl
	I1101 09:21:03.081318  216020 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1101 09:21:03.081392  216020 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1101 09:21:03.113396  216020 cri.go:89] found id: ""
	I1101 09:21:03.113424  216020 logs.go:282] 0 containers: []
	W1101 09:21:03.113435  216020 logs.go:284] No container was found matching "etcd"
	I1101 09:21:03.113442  216020 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1101 09:21:03.113499  216020 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1101 09:21:03.149060  216020 cri.go:89] found id: ""
	I1101 09:21:03.149090  216020 logs.go:282] 0 containers: []
	W1101 09:21:03.149099  216020 logs.go:284] No container was found matching "coredns"
	I1101 09:21:03.149104  216020 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1101 09:21:03.149148  216020 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1101 09:21:03.183012  216020 cri.go:89] found id: "7bc51deb4b19b20f402b361941a1b67db11ece9919ebd34259e751ec1f07abb8"
	I1101 09:21:03.183038  216020 cri.go:89] found id: ""
	I1101 09:21:03.183048  216020 logs.go:282] 1 containers: [7bc51deb4b19b20f402b361941a1b67db11ece9919ebd34259e751ec1f07abb8]
	I1101 09:21:03.183109  216020 ssh_runner.go:195] Run: which crictl
	I1101 09:21:03.187539  216020 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1101 09:21:03.187609  216020 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1101 09:21:03.223255  216020 cri.go:89] found id: ""
	I1101 09:21:03.223287  216020 logs.go:282] 0 containers: []
	W1101 09:21:03.223298  216020 logs.go:284] No container was found matching "kube-proxy"
	I1101 09:21:03.223307  216020 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1101 09:21:03.223380  216020 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1101 09:21:03.263664  216020 cri.go:89] found id: "df274c74effda434b9fae196ccc17a5c3a3d2b7872fbe77a79ff7e34adc6b3cd"
	I1101 09:21:03.263684  216020 cri.go:89] found id: ""
	I1101 09:21:03.263691  216020 logs.go:282] 1 containers: [df274c74effda434b9fae196ccc17a5c3a3d2b7872fbe77a79ff7e34adc6b3cd]
	I1101 09:21:03.263745  216020 ssh_runner.go:195] Run: which crictl
	I1101 09:21:03.268387  216020 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1101 09:21:03.268458  216020 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1101 09:21:03.298659  216020 cri.go:89] found id: ""
	I1101 09:21:03.298685  216020 logs.go:282] 0 containers: []
	W1101 09:21:03.298697  216020 logs.go:284] No container was found matching "kindnet"
	I1101 09:21:03.298704  216020 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1101 09:21:03.298760  216020 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1101 09:21:03.331908  216020 cri.go:89] found id: ""
	I1101 09:21:03.331952  216020 logs.go:282] 0 containers: []
	W1101 09:21:03.331963  216020 logs.go:284] No container was found matching "storage-provisioner"
	I1101 09:21:03.331985  216020 logs.go:123] Gathering logs for container status ...
	I1101 09:21:03.332000  216020 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1101 09:21:03.371380  216020 logs.go:123] Gathering logs for kubelet ...
	I1101 09:21:03.371411  216020 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1101 09:21:03.492909  216020 logs.go:123] Gathering logs for kube-apiserver [b88c1a6c22eb7f0b872bf19c02addfdad95c9f578a8590d19c57a767cb19559f] ...
	I1101 09:21:03.492943  216020 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b88c1a6c22eb7f0b872bf19c02addfdad95c9f578a8590d19c57a767cb19559f"
	I1101 09:21:03.535514  216020 logs.go:123] Gathering logs for kube-controller-manager [df274c74effda434b9fae196ccc17a5c3a3d2b7872fbe77a79ff7e34adc6b3cd] ...
	I1101 09:21:03.535554  216020 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 df274c74effda434b9fae196ccc17a5c3a3d2b7872fbe77a79ff7e34adc6b3cd"
	I1101 09:21:03.567725  216020 logs.go:123] Gathering logs for CRI-O ...
	I1101 09:21:03.567751  216020 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1101 09:21:03.624454  216020 logs.go:123] Gathering logs for dmesg ...
	I1101 09:21:03.624492  216020 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1101 09:21:03.641942  216020 logs.go:123] Gathering logs for describe nodes ...
	I1101 09:21:03.641988  216020 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1101 09:21:03.724699  216020 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1101 09:21:03.724722  216020 logs.go:123] Gathering logs for kube-apiserver [f1d1ad774988f50a8cc9b852d65649e1e6961c9ea7280d9c95161580ca21bce2] ...
	I1101 09:21:03.724739  216020 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f1d1ad774988f50a8cc9b852d65649e1e6961c9ea7280d9c95161580ca21bce2"
	I1101 09:21:03.762460  216020 logs.go:123] Gathering logs for kube-scheduler [7bc51deb4b19b20f402b361941a1b67db11ece9919ebd34259e751ec1f07abb8] ...
	I1101 09:21:03.762492  216020 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7bc51deb4b19b20f402b361941a1b67db11ece9919ebd34259e751ec1f07abb8"
	
	
	==> CRI-O <==
	Nov 01 09:20:25 old-k8s-version-152344 crio[561]: time="2025-11-01T09:20:25.516312474Z" level=info msg="Created container 6cd6c8363476bd0a34deef513a204c6da2611045224adb8efc65c2778e33c742: kubernetes-dashboard/kubernetes-dashboard-8694d4445c-qjl6t/kubernetes-dashboard" id=8934bbf7-b7f1-408c-ac4a-e6acdf5446da name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 09:20:25 old-k8s-version-152344 crio[561]: time="2025-11-01T09:20:25.517057832Z" level=info msg="Starting container: 6cd6c8363476bd0a34deef513a204c6da2611045224adb8efc65c2778e33c742" id=5ed41685-d974-46d8-b7e7-6515cd40b88f name=/runtime.v1.RuntimeService/StartContainer
	Nov 01 09:20:25 old-k8s-version-152344 crio[561]: time="2025-11-01T09:20:25.519430139Z" level=info msg="Started container" PID=1705 containerID=6cd6c8363476bd0a34deef513a204c6da2611045224adb8efc65c2778e33c742 description=kubernetes-dashboard/kubernetes-dashboard-8694d4445c-qjl6t/kubernetes-dashboard id=5ed41685-d974-46d8-b7e7-6515cd40b88f name=/runtime.v1.RuntimeService/StartContainer sandboxID=2c7ddba118785d2deb1fc9673a117ff1b35ba7728aadfca5a95a4f48f62329af
	Nov 01 09:20:38 old-k8s-version-152344 crio[561]: time="2025-11-01T09:20:38.163574837Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=27ff8c51-7c4d-4ff0-8e43-dcfea4b0460f name=/runtime.v1.ImageService/ImageStatus
	Nov 01 09:20:38 old-k8s-version-152344 crio[561]: time="2025-11-01T09:20:38.164576036Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=b06a6613-7caa-4c25-ba21-61165e9288a4 name=/runtime.v1.ImageService/ImageStatus
	Nov 01 09:20:38 old-k8s-version-152344 crio[561]: time="2025-11-01T09:20:38.165679763Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=1ff6bd49-468f-47ae-9c10-617e48fffd0b name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 09:20:38 old-k8s-version-152344 crio[561]: time="2025-11-01T09:20:38.165831982Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 09:20:38 old-k8s-version-152344 crio[561]: time="2025-11-01T09:20:38.171322821Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 09:20:38 old-k8s-version-152344 crio[561]: time="2025-11-01T09:20:38.171532521Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/2e436ef81431ba7cf3db6b233fcc814196c8653e4428b79b25f2ea3d552e74f6/merged/etc/passwd: no such file or directory"
	Nov 01 09:20:38 old-k8s-version-152344 crio[561]: time="2025-11-01T09:20:38.171572584Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/2e436ef81431ba7cf3db6b233fcc814196c8653e4428b79b25f2ea3d552e74f6/merged/etc/group: no such file or directory"
	Nov 01 09:20:38 old-k8s-version-152344 crio[561]: time="2025-11-01T09:20:38.171930821Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 09:20:38 old-k8s-version-152344 crio[561]: time="2025-11-01T09:20:38.199953394Z" level=info msg="Created container 77f34a751546cae898ab472655f56c5c32eccc085038636d42ef139d6abf16c6: kube-system/storage-provisioner/storage-provisioner" id=1ff6bd49-468f-47ae-9c10-617e48fffd0b name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 09:20:38 old-k8s-version-152344 crio[561]: time="2025-11-01T09:20:38.200756258Z" level=info msg="Starting container: 77f34a751546cae898ab472655f56c5c32eccc085038636d42ef139d6abf16c6" id=165649da-dc73-441f-8857-37c9f13b58ae name=/runtime.v1.RuntimeService/StartContainer
	Nov 01 09:20:38 old-k8s-version-152344 crio[561]: time="2025-11-01T09:20:38.202684475Z" level=info msg="Started container" PID=1727 containerID=77f34a751546cae898ab472655f56c5c32eccc085038636d42ef139d6abf16c6 description=kube-system/storage-provisioner/storage-provisioner id=165649da-dc73-441f-8857-37c9f13b58ae name=/runtime.v1.RuntimeService/StartContainer sandboxID=767963ee6ddc60b9fb8cd5aeb646be453ff51b86ada9b46b97faa7bd74f197ec
	Nov 01 09:20:44 old-k8s-version-152344 crio[561]: time="2025-11-01T09:20:44.044637254Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=b6d4af81-4ee4-4fac-a0d4-c0fd97817899 name=/runtime.v1.ImageService/ImageStatus
	Nov 01 09:20:44 old-k8s-version-152344 crio[561]: time="2025-11-01T09:20:44.045576541Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=5536f350-e180-4d53-904c-4ca9da79b95a name=/runtime.v1.ImageService/ImageStatus
	Nov 01 09:20:44 old-k8s-version-152344 crio[561]: time="2025-11-01T09:20:44.0466031Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-j7gbs/dashboard-metrics-scraper" id=b182efb6-832e-4f64-9c24-5f56993671f1 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 09:20:44 old-k8s-version-152344 crio[561]: time="2025-11-01T09:20:44.046773798Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 09:20:44 old-k8s-version-152344 crio[561]: time="2025-11-01T09:20:44.054073889Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 09:20:44 old-k8s-version-152344 crio[561]: time="2025-11-01T09:20:44.054755445Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 09:20:44 old-k8s-version-152344 crio[561]: time="2025-11-01T09:20:44.085859311Z" level=info msg="Created container 8270b89462e20b129ac3e5fdb15f4995bd39adc8b8af28d2aa2bfb55eb070be9: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-j7gbs/dashboard-metrics-scraper" id=b182efb6-832e-4f64-9c24-5f56993671f1 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 09:20:44 old-k8s-version-152344 crio[561]: time="2025-11-01T09:20:44.086568399Z" level=info msg="Starting container: 8270b89462e20b129ac3e5fdb15f4995bd39adc8b8af28d2aa2bfb55eb070be9" id=14f870ad-8a21-4589-8310-d1afca071fc4 name=/runtime.v1.RuntimeService/StartContainer
	Nov 01 09:20:44 old-k8s-version-152344 crio[561]: time="2025-11-01T09:20:44.088668685Z" level=info msg="Started container" PID=1743 containerID=8270b89462e20b129ac3e5fdb15f4995bd39adc8b8af28d2aa2bfb55eb070be9 description=kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-j7gbs/dashboard-metrics-scraper id=14f870ad-8a21-4589-8310-d1afca071fc4 name=/runtime.v1.RuntimeService/StartContainer sandboxID=f32c0500baeb9bfc9c20637e5f7099b0e272f79f99dd595898529199f9021069
	Nov 01 09:20:44 old-k8s-version-152344 crio[561]: time="2025-11-01T09:20:44.184978433Z" level=info msg="Removing container: 5e215f9c05cd829424f63ba914abecae250267dd476388a79b478303e529116e" id=31ebe799-0cc6-4136-86bc-c60091797e73 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 01 09:20:44 old-k8s-version-152344 crio[561]: time="2025-11-01T09:20:44.196079441Z" level=info msg="Removed container 5e215f9c05cd829424f63ba914abecae250267dd476388a79b478303e529116e: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-j7gbs/dashboard-metrics-scraper" id=31ebe799-0cc6-4136-86bc-c60091797e73 name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED              STATE               NAME                        ATTEMPT             POD ID              POD                                              NAMESPACE
	8270b89462e20       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           22 seconds ago       Exited              dashboard-metrics-scraper   2                   f32c0500baeb9       dashboard-metrics-scraper-5f989dc9cf-j7gbs       kubernetes-dashboard
	77f34a751546c       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           27 seconds ago       Running             storage-provisioner         1                   767963ee6ddc6       storage-provisioner                              kube-system
	6cd6c8363476b       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   40 seconds ago       Running             kubernetes-dashboard        0                   2c7ddba118785       kubernetes-dashboard-8694d4445c-qjl6t            kubernetes-dashboard
	4c720398bb4fa       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                           58 seconds ago       Running             coredns                     0                   b0ddd4ebafc34       coredns-5dd5756b68-gcvgr                         kube-system
	8608b0f7e0e07       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           58 seconds ago       Running             busybox                     1                   02eb1388f086c       busybox                                          default
	5b6c9b666836f       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           58 seconds ago       Exited              storage-provisioner         0                   767963ee6ddc6       storage-provisioner                              kube-system
	0e646a0053e64       ea1030da44aa18666a7bf15fddd2a38c3143c3277159cb8bdd95f45c8ce62d7a                                           58 seconds ago       Running             kube-proxy                  0                   965c4dffa077c       kube-proxy-w5hpl                                 kube-system
	a4a5247c122cf       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           58 seconds ago       Running             kindnet-cni                 0                   8e807dda15af0       kindnet-9lbnx                                    kube-system
	b3160f6c872bd       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                           About a minute ago   Running             etcd                        0                   de447a4a537e9       etcd-old-k8s-version-152344                      kube-system
	872d888710045       bb5e0dde9054c02d6badee88547be7e7bb7b7b818d277c8a61b4b29484bbff95                                           About a minute ago   Running             kube-apiserver              0                   eca798aec45c8       kube-apiserver-old-k8s-version-152344            kube-system
	a606cb94dcd3a       4be79c38a4bab6e1252a35697500e8a0d9c5c7c771d9fcc1935c9a7f6cdf4c62                                           About a minute ago   Running             kube-controller-manager     0                   95056a4e7e624       kube-controller-manager-old-k8s-version-152344   kube-system
	e85dd09f3807f       f6f496300a2ae7a6727ccf3080d66d2fd22b6cfc271df5351c976c23a28bb157                                           About a minute ago   Running             kube-scheduler              0                   d4cf3d87b578c       kube-scheduler-old-k8s-version-152344            kube-system
	
	
	==> coredns [4c720398bb4fa757d679305ceeef216ff753830d7ac6b7c1e25b129c46813c05] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 25cf5af2951e282c4b0e961a02fb5d3e57c974501832fee92eec17b5135b9ec9d9e87d2ac94e6d117a5ed3dd54e8800aa7b4479706eb54497145ccdb80397d1b
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:39631 - 6829 "HINFO IN 5249536083815428656.6825250384916931008. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.492555733s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> describe nodes <==
	Name:               old-k8s-version-152344
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-152344
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=21e20c7776311c6e29254646bf2620ea610dd192
	                    minikube.k8s.io/name=old-k8s-version-152344
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_01T09_19_01_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 01 Nov 2025 09:18:57 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-152344
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 01 Nov 2025 09:20:58 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 01 Nov 2025 09:20:37 +0000   Sat, 01 Nov 2025 09:18:56 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 01 Nov 2025 09:20:37 +0000   Sat, 01 Nov 2025 09:18:56 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 01 Nov 2025 09:20:37 +0000   Sat, 01 Nov 2025 09:18:56 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 01 Nov 2025 09:20:37 +0000   Sat, 01 Nov 2025 09:19:26 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.103.2
	  Hostname:    old-k8s-version-152344
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863348Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863348Ki
	  pods:               110
	System Info:
	  Machine ID:                 98aac72b9abe9f06f1b9b38568f5cc96
	  System UUID:                2997294f-a5eb-4a19-8c2c-94960c03c89f
	  Boot ID:                    87f40279-f2b1-40f3-beea-00aea5942dd1
	  Kernel Version:             6.8.0-1043-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         97s
	  kube-system                 coredns-5dd5756b68-gcvgr                          100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     113s
	  kube-system                 etcd-old-k8s-version-152344                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         2m6s
	  kube-system                 kindnet-9lbnx                                     100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      113s
	  kube-system                 kube-apiserver-old-k8s-version-152344             250m (3%)     0 (0%)      0 (0%)           0 (0%)         2m8s
	  kube-system                 kube-controller-manager-old-k8s-version-152344    200m (2%)     0 (0%)      0 (0%)           0 (0%)         2m6s
	  kube-system                 kube-proxy-w5hpl                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         113s
	  kube-system                 kube-scheduler-old-k8s-version-152344             100m (1%)     0 (0%)      0 (0%)           0 (0%)         2m6s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         112s
	  kubernetes-dashboard        dashboard-metrics-scraper-5f989dc9cf-j7gbs        0 (0%)        0 (0%)      0 (0%)           0 (0%)         47s
	  kubernetes-dashboard        kubernetes-dashboard-8694d4445c-qjl6t             0 (0%)        0 (0%)      0 (0%)           0 (0%)         47s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 111s                   kube-proxy       
	  Normal  Starting                 58s                    kube-proxy       
	  Normal  Starting                 2m12s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m11s (x9 over 2m12s)  kubelet          Node old-k8s-version-152344 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m11s (x8 over 2m12s)  kubelet          Node old-k8s-version-152344 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m11s (x7 over 2m12s)  kubelet          Node old-k8s-version-152344 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    2m6s                   kubelet          Node old-k8s-version-152344 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  2m6s                   kubelet          Node old-k8s-version-152344 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     2m6s                   kubelet          Node old-k8s-version-152344 status is now: NodeHasSufficientPID
	  Normal  Starting                 2m6s                   kubelet          Starting kubelet.
	  Normal  RegisteredNode           114s                   node-controller  Node old-k8s-version-152344 event: Registered Node old-k8s-version-152344 in Controller
	  Normal  NodeReady                100s                   kubelet          Node old-k8s-version-152344 status is now: NodeReady
	  Normal  Starting                 62s                    kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  62s (x8 over 62s)      kubelet          Node old-k8s-version-152344 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    62s (x8 over 62s)      kubelet          Node old-k8s-version-152344 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     62s (x8 over 62s)      kubelet          Node old-k8s-version-152344 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           47s                    node-controller  Node old-k8s-version-152344 event: Registered Node old-k8s-version-152344 in Controller
	
	
	==> dmesg <==
	[  +0.000025] ll header: 00000000: 0a 94 8e 4a 1a 1e 72 e7 eb 50 9b c7 08 00
	[  +2.047726] IPv4: martian source 10.105.176.244 from 192.168.49.2, on dev eth0
	[  +0.000008] ll header: 00000000: 0a 94 8e 4a 1a 1e 72 e7 eb 50 9b c7 08 00
	[Nov 1 08:38] IPv4: martian source 10.105.176.244 from 192.168.49.2, on dev eth0
	[  +0.000025] ll header: 00000000: 0a 94 8e 4a 1a 1e 72 e7 eb 50 9b c7 08 00
	[  +2.215674] IPv4: martian source 10.105.176.244 from 192.168.49.2, on dev eth0
	[  +0.000007] ll header: 00000000: 0a 94 8e 4a 1a 1e 72 e7 eb 50 9b c7 08 00
	[  +1.047939] IPv4: martian source 10.244.0.3 from 192.168.49.2, on dev eth0
	[  +0.000023] ll header: 00000000: 0a 94 8e 4a 1a 1e 72 e7 eb 50 9b c7 08 00
	[  +1.023915] IPv4: martian source 10.244.0.3 from 192.168.49.2, on dev eth0
	[  +0.000007] ll header: 00000000: 0a 94 8e 4a 1a 1e 72 e7 eb 50 9b c7 08 00
	[  +1.023851] IPv4: martian source 10.244.0.3 from 192.168.49.2, on dev eth0
	[  +0.000021] ll header: 00000000: 0a 94 8e 4a 1a 1e 72 e7 eb 50 9b c7 08 00
	[  +1.023867] IPv4: martian source 10.244.0.3 from 192.168.49.2, on dev eth0
	[  +0.000026] ll header: 00000000: 0a 94 8e 4a 1a 1e 72 e7 eb 50 9b c7 08 00
	[  +1.023880] IPv4: martian source 10.244.0.3 from 192.168.49.2, on dev eth0
	[  +0.000033] ll header: 00000000: 0a 94 8e 4a 1a 1e 72 e7 eb 50 9b c7 08 00
	[  +1.023884] IPv4: martian source 10.244.0.3 from 192.168.49.2, on dev eth0
	[  +0.000029] ll header: 00000000: 0a 94 8e 4a 1a 1e 72 e7 eb 50 9b c7 08 00
	[  +1.023851] IPv4: martian source 10.244.0.3 from 192.168.49.2, on dev eth0
	[  +0.000028] ll header: 00000000: 0a 94 8e 4a 1a 1e 72 e7 eb 50 9b c7 08 00
	[  +4.031524] IPv4: martian source 10.244.0.3 from 192.168.49.2, on dev eth0
	[  +0.000026] ll header: 00000000: 0a 94 8e 4a 1a 1e 72 e7 eb 50 9b c7 08 00
	[  +8.255123] IPv4: martian source 10.244.0.3 from 192.168.49.2, on dev eth0
	[  +0.000022] ll header: 00000000: 0a 94 8e 4a 1a 1e 72 e7 eb 50 9b c7 08 00
	
	
	==> etcd [b3160f6c872bd74ec717f3bdc6505f4d22b0d84f6f4142f3e0529df91607f430] <==
	{"level":"info","ts":"2025-11-01T09:20:04.630536Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"3336683c081d149d","local-member-id":"f23060b075c4c089","added-peer-id":"f23060b075c4c089","added-peer-peer-urls":["https://192.168.103.2:2380"]}
	{"level":"info","ts":"2025-11-01T09:20:04.630659Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"3336683c081d149d","local-member-id":"f23060b075c4c089","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-01T09:20:04.63069Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-01T09:20:04.630817Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-11-01T09:20:04.631063Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"f23060b075c4c089","initial-advertise-peer-urls":["https://192.168.103.2:2380"],"listen-peer-urls":["https://192.168.103.2:2380"],"advertise-client-urls":["https://192.168.103.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.103.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-11-01T09:20:04.631065Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.103.2:2380"}
	{"level":"info","ts":"2025-11-01T09:20:04.631091Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-11-01T09:20:04.631104Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.103.2:2380"}
	{"level":"info","ts":"2025-11-01T09:20:04.631478Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-11-01T09:20:04.631509Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-11-01T09:20:04.631519Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-11-01T09:20:05.818797Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 is starting a new election at term 2"}
	{"level":"info","ts":"2025-11-01T09:20:05.818912Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 became pre-candidate at term 2"}
	{"level":"info","ts":"2025-11-01T09:20:05.818929Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 received MsgPreVoteResp from f23060b075c4c089 at term 2"}
	{"level":"info","ts":"2025-11-01T09:20:05.818941Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 became candidate at term 3"}
	{"level":"info","ts":"2025-11-01T09:20:05.818947Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 received MsgVoteResp from f23060b075c4c089 at term 3"}
	{"level":"info","ts":"2025-11-01T09:20:05.818955Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 became leader at term 3"}
	{"level":"info","ts":"2025-11-01T09:20:05.818962Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: f23060b075c4c089 elected leader f23060b075c4c089 at term 3"}
	{"level":"info","ts":"2025-11-01T09:20:05.820699Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"f23060b075c4c089","local-member-attributes":"{Name:old-k8s-version-152344 ClientURLs:[https://192.168.103.2:2379]}","request-path":"/0/members/f23060b075c4c089/attributes","cluster-id":"3336683c081d149d","publish-timeout":"7s"}
	{"level":"info","ts":"2025-11-01T09:20:05.820691Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-11-01T09:20:05.820745Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-11-01T09:20:05.821047Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-11-01T09:20:05.821072Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-11-01T09:20:05.82245Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-11-01T09:20:05.823036Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.103.2:2379"}
	
	
	==> kernel <==
	 09:21:06 up  1:03,  0 user,  load average: 2.81, 2.49, 1.53
	Linux old-k8s-version-152344 6.8.0-1043-gcp #46~22.04.1-Ubuntu SMP Wed Oct 22 19:00:03 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [a4a5247c122cf0b4dc4fa5a348dbacca1caae7a3afc1116e75392ae7a3ec6dd2] <==
	I1101 09:20:07.675406       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1101 09:20:07.675761       1 main.go:139] hostIP = 192.168.103.2
	podIP = 192.168.103.2
	I1101 09:20:07.675950       1 main.go:148] setting mtu 1500 for CNI 
	I1101 09:20:07.675974       1 main.go:178] kindnetd IP family: "ipv4"
	I1101 09:20:07.676008       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-01T09:20:07Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1101 09:20:07.953472       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1101 09:20:07.953497       1 controller.go:381] "Waiting for informer caches to sync"
	I1101 09:20:07.953508       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1101 09:20:08.046165       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1101 09:20:08.453848       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1101 09:20:08.453992       1 metrics.go:72] Registering metrics
	I1101 09:20:08.454113       1 controller.go:711] "Syncing nftables rules"
	I1101 09:20:17.952939       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1101 09:20:17.953001       1 main.go:301] handling current node
	I1101 09:20:27.953089       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1101 09:20:27.953124       1 main.go:301] handling current node
	I1101 09:20:37.953030       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1101 09:20:37.953067       1 main.go:301] handling current node
	I1101 09:20:47.953082       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1101 09:20:47.953135       1 main.go:301] handling current node
	I1101 09:20:57.958982       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1101 09:20:57.959018       1 main.go:301] handling current node
	
	
	==> kube-apiserver [872d88871004564b9fec7c298dff556377a0ff948066c78e7c1b182d8b6271f4] <==
	I1101 09:20:06.846715       1 shared_informer.go:318] Caches are synced for node_authorizer
	I1101 09:20:06.848214       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1101 09:20:06.892091       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1101 09:20:06.892021       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1101 09:20:06.892374       1 shared_informer.go:318] Caches are synced for configmaps
	I1101 09:20:06.892432       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I1101 09:20:06.892451       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I1101 09:20:06.892454       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I1101 09:20:06.894411       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1101 09:20:06.894447       1 aggregator.go:166] initial CRD sync complete...
	I1101 09:20:06.894455       1 autoregister_controller.go:141] Starting autoregister controller
	I1101 09:20:06.894461       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1101 09:20:06.894468       1 cache.go:39] Caches are synced for autoregister controller
	E1101 09:20:06.897719       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1101 09:20:07.795598       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1101 09:20:08.011808       1 controller.go:624] quota admission added evaluator for: namespaces
	I1101 09:20:08.067192       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1101 09:20:08.108133       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1101 09:20:08.118057       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1101 09:20:08.127952       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1101 09:20:08.180017       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.97.192.95"}
	I1101 09:20:08.196762       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.111.150.241"}
	I1101 09:20:19.045750       1 controller.go:624] quota admission added evaluator for: endpoints
	I1101 09:20:19.175886       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I1101 09:20:19.271108       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [a606cb94dcd3aedf1e555a4a15b3aa6ab1c1fbfea9bfe61bea7fb5777b59cb5a] <==
	I1101 09:20:19.201995       1 shared_informer.go:318] Caches are synced for endpoint_slice
	I1101 09:20:19.203134       1 shared_informer.go:318] Caches are synced for daemon sets
	I1101 09:20:19.206855       1 shared_informer.go:318] Caches are synced for TTL
	I1101 09:20:19.210676       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="28.085097ms"
	I1101 09:20:19.218405       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="35.229798ms"
	I1101 09:20:19.221759       1 shared_informer.go:318] Caches are synced for resource quota
	I1101 09:20:19.223786       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="12.992128ms"
	I1101 09:20:19.231169       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="7.323894ms"
	I1101 09:20:19.231167       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="12.638865ms"
	I1101 09:20:19.231314       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="92.933µs"
	I1101 09:20:19.238919       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="90.105µs"
	I1101 09:20:19.252214       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="77.553µs"
	I1101 09:20:19.296081       1 shared_informer.go:318] Caches are synced for ClusterRoleAggregator
	I1101 09:20:19.636541       1 shared_informer.go:318] Caches are synced for garbage collector
	I1101 09:20:19.654648       1 shared_informer.go:318] Caches are synced for garbage collector
	I1101 09:20:19.654683       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1101 09:20:22.133252       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="64.079µs"
	I1101 09:20:23.141628       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="56.365µs"
	I1101 09:20:24.142647       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="94.398µs"
	I1101 09:20:26.156281       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="7.209306ms"
	I1101 09:20:26.156411       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="67.223µs"
	I1101 09:20:44.196967       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="88.071µs"
	I1101 09:20:47.234625       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="8.154737ms"
	I1101 09:20:47.234763       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="88.034µs"
	I1101 09:20:49.526702       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="84.53µs"
	
	
	==> kube-proxy [0e646a0053e644e0c8aaf75d4f21ac7cb97c0c1c5004411b54aea626f1cb6948] <==
	I1101 09:20:07.567348       1 server_others.go:69] "Using iptables proxy"
	I1101 09:20:07.583432       1 node.go:141] Successfully retrieved node IP: 192.168.103.2
	I1101 09:20:07.609424       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1101 09:20:07.612699       1 server_others.go:152] "Using iptables Proxier"
	I1101 09:20:07.612857       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1101 09:20:07.612924       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1101 09:20:07.612974       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1101 09:20:07.613293       1 server.go:846] "Version info" version="v1.28.0"
	I1101 09:20:07.613429       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1101 09:20:07.614514       1 config.go:188] "Starting service config controller"
	I1101 09:20:07.615389       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1101 09:20:07.615135       1 config.go:315] "Starting node config controller"
	I1101 09:20:07.615516       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1101 09:20:07.615322       1 config.go:97] "Starting endpoint slice config controller"
	I1101 09:20:07.615528       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1101 09:20:07.716271       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1101 09:20:07.716331       1 shared_informer.go:318] Caches are synced for service config
	I1101 09:20:07.716652       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [e85dd09f3807f74307934a4f301776d16ac8f60eb9cc757e6f6bf2553901baef] <==
	I1101 09:20:05.257332       1 serving.go:348] Generated self-signed cert in-memory
	W1101 09:20:06.811478       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1101 09:20:06.811512       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1101 09:20:06.811525       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1101 09:20:06.811534       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1101 09:20:06.841889       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.0"
	I1101 09:20:06.841932       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1101 09:20:06.843791       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1101 09:20:06.843879       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1101 09:20:06.845852       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I1101 09:20:06.845972       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1101 09:20:06.944492       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Nov 01 09:20:19 old-k8s-version-152344 kubelet[718]: I1101 09:20:19.333382     718 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/827e6d08-5ed0-451b-84d3-91922812871c-tmp-volume\") pod \"kubernetes-dashboard-8694d4445c-qjl6t\" (UID: \"827e6d08-5ed0-451b-84d3-91922812871c\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-qjl6t"
	Nov 01 09:20:19 old-k8s-version-152344 kubelet[718]: I1101 09:20:19.333445     718 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/44708f58-737c-4f77-af56-0cee23c3d247-tmp-volume\") pod \"dashboard-metrics-scraper-5f989dc9cf-j7gbs\" (UID: \"44708f58-737c-4f77-af56-0cee23c3d247\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-j7gbs"
	Nov 01 09:20:19 old-k8s-version-152344 kubelet[718]: I1101 09:20:19.333482     718 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5sv74\" (UniqueName: \"kubernetes.io/projected/827e6d08-5ed0-451b-84d3-91922812871c-kube-api-access-5sv74\") pod \"kubernetes-dashboard-8694d4445c-qjl6t\" (UID: \"827e6d08-5ed0-451b-84d3-91922812871c\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-qjl6t"
	Nov 01 09:20:19 old-k8s-version-152344 kubelet[718]: I1101 09:20:19.333515     718 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j5pqh\" (UniqueName: \"kubernetes.io/projected/44708f58-737c-4f77-af56-0cee23c3d247-kube-api-access-j5pqh\") pod \"dashboard-metrics-scraper-5f989dc9cf-j7gbs\" (UID: \"44708f58-737c-4f77-af56-0cee23c3d247\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-j7gbs"
	Nov 01 09:20:22 old-k8s-version-152344 kubelet[718]: I1101 09:20:22.120527     718 scope.go:117] "RemoveContainer" containerID="f84d53fdd254a7b97e86cde778ebec6d0fb72e4efc277a406e88166a9e1197b8"
	Nov 01 09:20:23 old-k8s-version-152344 kubelet[718]: I1101 09:20:23.125599     718 scope.go:117] "RemoveContainer" containerID="f84d53fdd254a7b97e86cde778ebec6d0fb72e4efc277a406e88166a9e1197b8"
	Nov 01 09:20:23 old-k8s-version-152344 kubelet[718]: I1101 09:20:23.126094     718 scope.go:117] "RemoveContainer" containerID="5e215f9c05cd829424f63ba914abecae250267dd476388a79b478303e529116e"
	Nov 01 09:20:23 old-k8s-version-152344 kubelet[718]: E1101 09:20:23.126457     718 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-j7gbs_kubernetes-dashboard(44708f58-737c-4f77-af56-0cee23c3d247)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-j7gbs" podUID="44708f58-737c-4f77-af56-0cee23c3d247"
	Nov 01 09:20:24 old-k8s-version-152344 kubelet[718]: I1101 09:20:24.131056     718 scope.go:117] "RemoveContainer" containerID="5e215f9c05cd829424f63ba914abecae250267dd476388a79b478303e529116e"
	Nov 01 09:20:24 old-k8s-version-152344 kubelet[718]: E1101 09:20:24.131421     718 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-j7gbs_kubernetes-dashboard(44708f58-737c-4f77-af56-0cee23c3d247)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-j7gbs" podUID="44708f58-737c-4f77-af56-0cee23c3d247"
	Nov 01 09:20:29 old-k8s-version-152344 kubelet[718]: I1101 09:20:29.515580     718 scope.go:117] "RemoveContainer" containerID="5e215f9c05cd829424f63ba914abecae250267dd476388a79b478303e529116e"
	Nov 01 09:20:29 old-k8s-version-152344 kubelet[718]: E1101 09:20:29.516036     718 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-j7gbs_kubernetes-dashboard(44708f58-737c-4f77-af56-0cee23c3d247)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-j7gbs" podUID="44708f58-737c-4f77-af56-0cee23c3d247"
	Nov 01 09:20:38 old-k8s-version-152344 kubelet[718]: I1101 09:20:38.163030     718 scope.go:117] "RemoveContainer" containerID="5b6c9b666836f422d8078161076c25e4166a0975599c4a826a525cd241adf6aa"
	Nov 01 09:20:38 old-k8s-version-152344 kubelet[718]: I1101 09:20:38.175457     718 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-qjl6t" podStartSLOduration=13.251694977 podCreationTimestamp="2025-11-01 09:20:19 +0000 UTC" firstStartedPulling="2025-11-01 09:20:19.546658819 +0000 UTC m=+15.610599029" lastFinishedPulling="2025-11-01 09:20:25.470311403 +0000 UTC m=+21.534251611" observedRunningTime="2025-11-01 09:20:26.149835904 +0000 UTC m=+22.213776130" watchObservedRunningTime="2025-11-01 09:20:38.175347559 +0000 UTC m=+34.239287786"
	Nov 01 09:20:44 old-k8s-version-152344 kubelet[718]: I1101 09:20:44.043991     718 scope.go:117] "RemoveContainer" containerID="5e215f9c05cd829424f63ba914abecae250267dd476388a79b478303e529116e"
	Nov 01 09:20:44 old-k8s-version-152344 kubelet[718]: I1101 09:20:44.183743     718 scope.go:117] "RemoveContainer" containerID="5e215f9c05cd829424f63ba914abecae250267dd476388a79b478303e529116e"
	Nov 01 09:20:44 old-k8s-version-152344 kubelet[718]: I1101 09:20:44.184044     718 scope.go:117] "RemoveContainer" containerID="8270b89462e20b129ac3e5fdb15f4995bd39adc8b8af28d2aa2bfb55eb070be9"
	Nov 01 09:20:44 old-k8s-version-152344 kubelet[718]: E1101 09:20:44.184425     718 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-j7gbs_kubernetes-dashboard(44708f58-737c-4f77-af56-0cee23c3d247)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-j7gbs" podUID="44708f58-737c-4f77-af56-0cee23c3d247"
	Nov 01 09:20:49 old-k8s-version-152344 kubelet[718]: I1101 09:20:49.515730     718 scope.go:117] "RemoveContainer" containerID="8270b89462e20b129ac3e5fdb15f4995bd39adc8b8af28d2aa2bfb55eb070be9"
	Nov 01 09:20:49 old-k8s-version-152344 kubelet[718]: E1101 09:20:49.516070     718 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-j7gbs_kubernetes-dashboard(44708f58-737c-4f77-af56-0cee23c3d247)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-j7gbs" podUID="44708f58-737c-4f77-af56-0cee23c3d247"
	Nov 01 09:21:00 old-k8s-version-152344 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 01 09:21:00 old-k8s-version-152344 kubelet[718]: I1101 09:21:00.855471     718 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	Nov 01 09:21:00 old-k8s-version-152344 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 01 09:21:00 old-k8s-version-152344 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Nov 01 09:21:00 old-k8s-version-152344 systemd[1]: kubelet.service: Consumed 1.662s CPU time.
	
	
	==> kubernetes-dashboard [6cd6c8363476bd0a34deef513a204c6da2611045224adb8efc65c2778e33c742] <==
	2025/11/01 09:20:25 Starting overwatch
	2025/11/01 09:20:25 Using namespace: kubernetes-dashboard
	2025/11/01 09:20:25 Using in-cluster config to connect to apiserver
	2025/11/01 09:20:25 Using secret token for csrf signing
	2025/11/01 09:20:25 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/11/01 09:20:25 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/11/01 09:20:25 Successful initial request to the apiserver, version: v1.28.0
	2025/11/01 09:20:25 Generating JWE encryption key
	2025/11/01 09:20:25 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/11/01 09:20:25 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/11/01 09:20:26 Initializing JWE encryption key from synchronized object
	2025/11/01 09:20:26 Creating in-cluster Sidecar client
	2025/11/01 09:20:26 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/01 09:20:26 Serving insecurely on HTTP port: 9090
	2025/11/01 09:20:56 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [5b6c9b666836f422d8078161076c25e4166a0975599c4a826a525cd241adf6aa] <==
	I1101 09:20:07.487657       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1101 09:20:37.495639       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [77f34a751546cae898ab472655f56c5c32eccc085038636d42ef139d6abf16c6] <==
	I1101 09:20:38.216005       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1101 09:20:38.224919       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1101 09:20:38.224984       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1101 09:20:55.622291       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1101 09:20:55.622477       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-152344_03ce407e-ed10-49cd-badc-9e9a44d78715!
	I1101 09:20:55.622440       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"d54d713e-3c09-4409-a20a-85838e16fc43", APIVersion:"v1", ResourceVersion:"652", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-152344_03ce407e-ed10-49cd-badc-9e9a44d78715 became leader
	I1101 09:20:55.723429       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-152344_03ce407e-ed10-49cd-badc-9e9a44d78715!
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-152344 -n old-k8s-version-152344
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-152344 -n old-k8s-version-152344: exit status 2 (502.842545ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context old-k8s-version-152344 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/old-k8s-version/serial/Pause (6.98s)

                                                
                                    
TestStartStop/group/no-preload/serial/Pause (7.48s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-397460 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p no-preload-397460 --alsologtostderr -v=1: exit status 80 (2.598632934s)

                                                
                                                
-- stdout --
	* Pausing node no-preload-397460 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1101 09:21:03.416618  257542 out.go:360] Setting OutFile to fd 1 ...
	I1101 09:21:03.416795  257542 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:21:03.416807  257542 out.go:374] Setting ErrFile to fd 2...
	I1101 09:21:03.416814  257542 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:21:03.417073  257542 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21835-5913/.minikube/bin
	I1101 09:21:03.417420  257542 out.go:368] Setting JSON to false
	I1101 09:21:03.417468  257542 mustload.go:66] Loading cluster: no-preload-397460
	I1101 09:21:03.417821  257542 config.go:182] Loaded profile config "no-preload-397460": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:21:03.418303  257542 cli_runner.go:164] Run: docker container inspect no-preload-397460 --format={{.State.Status}}
	I1101 09:21:03.439366  257542 host.go:66] Checking if "no-preload-397460" exists ...
	I1101 09:21:03.439724  257542 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 09:21:03.513006  257542 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:77 OomKillDisable:false NGoroutines:86 SystemTime:2025-11-01 09:21:03.499658824 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1101 09:21:03.514001  257542 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21800/minikube-v1.37.0-1761658712-21800-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1761658712-21800/minikube-v1.37.0-1761658712-21800-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1761658712-21800-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:no-preload-397460 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1101 09:21:03.516070  257542 out.go:179] * Pausing node no-preload-397460 ... 
	I1101 09:21:03.518267  257542 host.go:66] Checking if "no-preload-397460" exists ...
	I1101 09:21:03.518637  257542 ssh_runner.go:195] Run: systemctl --version
	I1101 09:21:03.518721  257542 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-397460
	I1101 09:21:03.543673  257542 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33073 SSHKeyPath:/home/jenkins/minikube-integration/21835-5913/.minikube/machines/no-preload-397460/id_rsa Username:docker}
	I1101 09:21:03.647162  257542 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 09:21:03.662689  257542 pause.go:52] kubelet running: true
	I1101 09:21:03.662769  257542 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1101 09:21:03.865391  257542 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1101 09:21:03.865550  257542 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1101 09:21:03.946949  257542 cri.go:89] found id: "a32fea6f71cfef9621e6124cb4e8c00bce86e68063a63d6eead05eb0b30c3fd2"
	I1101 09:21:03.946972  257542 cri.go:89] found id: "7f8c011365813d9b9a3fc4de470f15a11f84b8fc7448785cac2980d200ab6328"
	I1101 09:21:03.946976  257542 cri.go:89] found id: "e68d2c72f2a280bb06520d0abda5c4e4af21e514107ea8a34112ce09ad3363e9"
	I1101 09:21:03.946980  257542 cri.go:89] found id: "4e0ef8237ebe095c9933e907c2062eaeb0996060e1bec0bf09d7e957557ac6f6"
	I1101 09:21:03.946992  257542 cri.go:89] found id: "60a457721a2e92a962fa3f6f4b3be7038d081f4c316d20147a3d72dd30c7da7e"
	I1101 09:21:03.946999  257542 cri.go:89] found id: "fd6c2e567397890a7f512b409824d618cc40968b819d316cdae2a59eaaeee805"
	I1101 09:21:03.947003  257542 cri.go:89] found id: "1e65eafe05118922eba4075b65156c842b7bb2e5dc4b74d48586e74e8830e4ad"
	I1101 09:21:03.947007  257542 cri.go:89] found id: "1b44261dff64d3c4c47d37a621f72a585d7a917cf393d156278c5cbcf49d2100"
	I1101 09:21:03.947011  257542 cri.go:89] found id: "a154077d09e972273696e9d1d20b891c240a792171f425c23b57e8599069bf1b"
	I1101 09:21:03.947019  257542 cri.go:89] found id: "6dc0e61dffecc42ec75132038f2cd325ebcc36b066eb79f9d644c0a144d6656c"
	I1101 09:21:03.947023  257542 cri.go:89] found id: "6fcf14948dd8fe2f7f1fbb5fe1c692b7d291220499560edc384c983cbef16d2e"
	I1101 09:21:03.947028  257542 cri.go:89] found id: ""
	I1101 09:21:03.947072  257542 ssh_runner.go:195] Run: sudo runc list -f json
	I1101 09:21:03.961316  257542 retry.go:31] will retry after 355.551617ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T09:21:03Z" level=error msg="open /run/runc: no such file or directory"
	I1101 09:21:04.317985  257542 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 09:21:04.332213  257542 pause.go:52] kubelet running: false
	I1101 09:21:04.332279  257542 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1101 09:21:04.492843  257542 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1101 09:21:04.492960  257542 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1101 09:21:04.570532  257542 cri.go:89] found id: "a32fea6f71cfef9621e6124cb4e8c00bce86e68063a63d6eead05eb0b30c3fd2"
	I1101 09:21:04.570560  257542 cri.go:89] found id: "7f8c011365813d9b9a3fc4de470f15a11f84b8fc7448785cac2980d200ab6328"
	I1101 09:21:04.570565  257542 cri.go:89] found id: "e68d2c72f2a280bb06520d0abda5c4e4af21e514107ea8a34112ce09ad3363e9"
	I1101 09:21:04.570569  257542 cri.go:89] found id: "4e0ef8237ebe095c9933e907c2062eaeb0996060e1bec0bf09d7e957557ac6f6"
	I1101 09:21:04.570572  257542 cri.go:89] found id: "60a457721a2e92a962fa3f6f4b3be7038d081f4c316d20147a3d72dd30c7da7e"
	I1101 09:21:04.570575  257542 cri.go:89] found id: "fd6c2e567397890a7f512b409824d618cc40968b819d316cdae2a59eaaeee805"
	I1101 09:21:04.570579  257542 cri.go:89] found id: "1e65eafe05118922eba4075b65156c842b7bb2e5dc4b74d48586e74e8830e4ad"
	I1101 09:21:04.570583  257542 cri.go:89] found id: "1b44261dff64d3c4c47d37a621f72a585d7a917cf393d156278c5cbcf49d2100"
	I1101 09:21:04.570588  257542 cri.go:89] found id: "a154077d09e972273696e9d1d20b891c240a792171f425c23b57e8599069bf1b"
	I1101 09:21:04.570596  257542 cri.go:89] found id: "6dc0e61dffecc42ec75132038f2cd325ebcc36b066eb79f9d644c0a144d6656c"
	I1101 09:21:04.570601  257542 cri.go:89] found id: "6fcf14948dd8fe2f7f1fbb5fe1c692b7d291220499560edc384c983cbef16d2e"
	I1101 09:21:04.570605  257542 cri.go:89] found id: ""
	I1101 09:21:04.570659  257542 ssh_runner.go:195] Run: sudo runc list -f json
	I1101 09:21:04.586048  257542 retry.go:31] will retry after 471.85708ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T09:21:04Z" level=error msg="open /run/runc: no such file or directory"
	I1101 09:21:05.058463  257542 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 09:21:05.075950  257542 pause.go:52] kubelet running: false
	I1101 09:21:05.076028  257542 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1101 09:21:05.247225  257542 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1101 09:21:05.247322  257542 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1101 09:21:05.330734  257542 cri.go:89] found id: "a32fea6f71cfef9621e6124cb4e8c00bce86e68063a63d6eead05eb0b30c3fd2"
	I1101 09:21:05.330760  257542 cri.go:89] found id: "7f8c011365813d9b9a3fc4de470f15a11f84b8fc7448785cac2980d200ab6328"
	I1101 09:21:05.330767  257542 cri.go:89] found id: "e68d2c72f2a280bb06520d0abda5c4e4af21e514107ea8a34112ce09ad3363e9"
	I1101 09:21:05.330772  257542 cri.go:89] found id: "4e0ef8237ebe095c9933e907c2062eaeb0996060e1bec0bf09d7e957557ac6f6"
	I1101 09:21:05.330776  257542 cri.go:89] found id: "60a457721a2e92a962fa3f6f4b3be7038d081f4c316d20147a3d72dd30c7da7e"
	I1101 09:21:05.330780  257542 cri.go:89] found id: "fd6c2e567397890a7f512b409824d618cc40968b819d316cdae2a59eaaeee805"
	I1101 09:21:05.330783  257542 cri.go:89] found id: "1e65eafe05118922eba4075b65156c842b7bb2e5dc4b74d48586e74e8830e4ad"
	I1101 09:21:05.330785  257542 cri.go:89] found id: "1b44261dff64d3c4c47d37a621f72a585d7a917cf393d156278c5cbcf49d2100"
	I1101 09:21:05.330788  257542 cri.go:89] found id: "a154077d09e972273696e9d1d20b891c240a792171f425c23b57e8599069bf1b"
	I1101 09:21:05.330794  257542 cri.go:89] found id: "6dc0e61dffecc42ec75132038f2cd325ebcc36b066eb79f9d644c0a144d6656c"
	I1101 09:21:05.330799  257542 cri.go:89] found id: "6fcf14948dd8fe2f7f1fbb5fe1c692b7d291220499560edc384c983cbef16d2e"
	I1101 09:21:05.330803  257542 cri.go:89] found id: ""
	I1101 09:21:05.330849  257542 ssh_runner.go:195] Run: sudo runc list -f json
	I1101 09:21:05.346465  257542 retry.go:31] will retry after 297.386196ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T09:21:05Z" level=error msg="open /run/runc: no such file or directory"
	I1101 09:21:05.644458  257542 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 09:21:05.660212  257542 pause.go:52] kubelet running: false
	I1101 09:21:05.660275  257542 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1101 09:21:05.831733  257542 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1101 09:21:05.831801  257542 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1101 09:21:05.916645  257542 cri.go:89] found id: "a32fea6f71cfef9621e6124cb4e8c00bce86e68063a63d6eead05eb0b30c3fd2"
	I1101 09:21:05.916698  257542 cri.go:89] found id: "7f8c011365813d9b9a3fc4de470f15a11f84b8fc7448785cac2980d200ab6328"
	I1101 09:21:05.916704  257542 cri.go:89] found id: "e68d2c72f2a280bb06520d0abda5c4e4af21e514107ea8a34112ce09ad3363e9"
	I1101 09:21:05.916708  257542 cri.go:89] found id: "4e0ef8237ebe095c9933e907c2062eaeb0996060e1bec0bf09d7e957557ac6f6"
	I1101 09:21:05.916711  257542 cri.go:89] found id: "60a457721a2e92a962fa3f6f4b3be7038d081f4c316d20147a3d72dd30c7da7e"
	I1101 09:21:05.916716  257542 cri.go:89] found id: "fd6c2e567397890a7f512b409824d618cc40968b819d316cdae2a59eaaeee805"
	I1101 09:21:05.916719  257542 cri.go:89] found id: "1e65eafe05118922eba4075b65156c842b7bb2e5dc4b74d48586e74e8830e4ad"
	I1101 09:21:05.916723  257542 cri.go:89] found id: "1b44261dff64d3c4c47d37a621f72a585d7a917cf393d156278c5cbcf49d2100"
	I1101 09:21:05.916726  257542 cri.go:89] found id: "a154077d09e972273696e9d1d20b891c240a792171f425c23b57e8599069bf1b"
	I1101 09:21:05.916736  257542 cri.go:89] found id: "6dc0e61dffecc42ec75132038f2cd325ebcc36b066eb79f9d644c0a144d6656c"
	I1101 09:21:05.916741  257542 cri.go:89] found id: "6fcf14948dd8fe2f7f1fbb5fe1c692b7d291220499560edc384c983cbef16d2e"
	I1101 09:21:05.916744  257542 cri.go:89] found id: ""
	I1101 09:21:05.916805  257542 ssh_runner.go:195] Run: sudo runc list -f json
	I1101 09:21:05.932426  257542 out.go:203] 
	W1101 09:21:05.934081  257542 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T09:21:05Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T09:21:05Z" level=error msg="open /run/runc: no such file or directory"
	
	W1101 09:21:05.934105  257542 out.go:285] * 
	* 
	W1101 09:21:05.938894  257542 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1101 09:21:05.941673  257542 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-amd64 pause -p no-preload-397460 --alsologtostderr -v=1 failed: exit status 80
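For reference, the pause path captured above reduces to three checks executed over SSH on the node: whether the kubelet unit is active, which CRI containers exist in the targeted namespaces, and what runc reports as running. The following is a minimal sketch of repeating those checks by hand, assuming the profile name no-preload-397460 from this run and that crictl and runc are on the node's PATH; the node-side commands are lifted from the Run: lines in the log above, and minikube ssh -p is assumed as the convenient way to reach the node.

	# open a shell on the node backing this profile
	minikube ssh -p no-preload-397460
	# 1) is the kubelet systemd unit active? (pause expects to be able to stop it)
	sudo systemctl is-active --quiet service kubelet; echo $?
	# 2) list CRI container IDs in the kube-system namespace, as the harness does
	sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system
	# 3) the step that fails in this run: runc cannot open its state directory
	sudo runc list -f json   # fails here with "open /run/runc: no such file or directory"

This only mirrors what the harness already executed; it confirms the same failure outside the test rather than fixing it.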
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect no-preload-397460
helpers_test.go:243: (dbg) docker inspect no-preload-397460:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "dcacf8ef764db0b040f92377cfa21ce046a2a4c6f431a1c080345899f64f1a0a",
	        "Created": "2025-11-01T09:18:48.77329288Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 249262,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-01T09:20:08.166757565Z",
	            "FinishedAt": "2025-11-01T09:20:07.010214475Z"
	        },
	        "Image": "sha256:a1caeebaf98ed0136731e905a1e086f77985a42c2ebb5a7e0b3d0bd7fcbe10cc",
	        "ResolvConfPath": "/var/lib/docker/containers/dcacf8ef764db0b040f92377cfa21ce046a2a4c6f431a1c080345899f64f1a0a/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/dcacf8ef764db0b040f92377cfa21ce046a2a4c6f431a1c080345899f64f1a0a/hostname",
	        "HostsPath": "/var/lib/docker/containers/dcacf8ef764db0b040f92377cfa21ce046a2a4c6f431a1c080345899f64f1a0a/hosts",
	        "LogPath": "/var/lib/docker/containers/dcacf8ef764db0b040f92377cfa21ce046a2a4c6f431a1c080345899f64f1a0a/dcacf8ef764db0b040f92377cfa21ce046a2a4c6f431a1c080345899f64f1a0a-json.log",
	        "Name": "/no-preload-397460",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-397460:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "no-preload-397460",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "dcacf8ef764db0b040f92377cfa21ce046a2a4c6f431a1c080345899f64f1a0a",
	                "LowerDir": "/var/lib/docker/overlay2/0b34dab8141c8641f76f199b5dd54ea0b7163a5882ccc5e46e7cd5e259fdb760-init/diff:/var/lib/docker/overlay2/95b4043403bc6c2b0722fad0bb31817195e1e004282cc78b772e7c8c9f9def2d/diff",
	                "MergedDir": "/var/lib/docker/overlay2/0b34dab8141c8641f76f199b5dd54ea0b7163a5882ccc5e46e7cd5e259fdb760/merged",
	                "UpperDir": "/var/lib/docker/overlay2/0b34dab8141c8641f76f199b5dd54ea0b7163a5882ccc5e46e7cd5e259fdb760/diff",
	                "WorkDir": "/var/lib/docker/overlay2/0b34dab8141c8641f76f199b5dd54ea0b7163a5882ccc5e46e7cd5e259fdb760/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-397460",
	                "Source": "/var/lib/docker/volumes/no-preload-397460/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-397460",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-397460",
	                "name.minikube.sigs.k8s.io": "no-preload-397460",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "6820c3172cb4737b686620bb279efbd06b0f6b816473a97ed9eae5815e4fc5bf",
	            "SandboxKey": "/var/run/docker/netns/6820c3172cb4",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33073"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33074"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33077"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33075"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33076"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-397460": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.94.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "22:ea:ca:6f:51:99",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "cc24cbf1ada0b118eca4d07595495e7c99849b988767800f68d20e97764309c9",
	                    "EndpointID": "cdf70b272ddb87e42cb3c427a1e3923f479097ad8a282bb886cd88f5c348ab81",
	                    "Gateway": "192.168.94.1",
	                    "IPAddress": "192.168.94.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-397460",
	                        "dcacf8ef764d"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
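The harness reads the host-side port mappings it needs from exactly this inspect document; the 22/tcp entry above (HostPort 33073) is the address the sshutil client connected to earlier in the log. A short sketch of the same lookup with docker's Go templates, assuming the container name no-preload-397460 as shown; the 22/tcp template is copied from the cli_runner line in the pause log, and the 8443/tcp variant simply applies the same pattern to the API-server port.

	# host port mapped to the node's SSH daemon (22/tcp); prints 33073 in this run
	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' no-preload-397460
	# same pattern for the Kubernetes API-server port (8443/tcp); prints 33076 here
	docker container inspect -f '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}' no-preload-397460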
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-397460 -n no-preload-397460
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-397460 -n no-preload-397460: exit status 2 (391.622166ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
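The status probe above passes a Go template, so --format={{.Host}} prints only the machine state, which is why it still reports Running after the failed pause; the nonzero exit code is what marks the profile as degraded. A sketch of the same probe next to a fuller view, assuming minikube's JSON status output flag (-o json) is available in this build:

	# the Host field alone, exactly as the post-mortem helper invokes it
	out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-397460 -n no-preload-397460
	# full host/kubelet/apiserver picture for the same profile (assumed flag)
	out/minikube-linux-amd64 status -p no-preload-397460 -o json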
helpers_test.go:252: <<< TestStartStop/group/no-preload/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-397460 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p no-preload-397460 logs -n 25: (1.565427342s)
helpers_test.go:260: TestStartStop/group/no-preload/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬────────────────────
─┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼────────────────────
─┤
	│ ssh     │ -p cilium-204434 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                 │ cilium-204434          │ jenkins │ v1.37.0 │ 01 Nov 25 09:18 UTC │                     │
	│ ssh     │ -p cilium-204434 sudo crio config                                                                                                                                                                                                             │ cilium-204434          │ jenkins │ v1.37.0 │ 01 Nov 25 09:18 UTC │                     │
	│ delete  │ -p cilium-204434                                                                                                                                                                                                                              │ cilium-204434          │ jenkins │ v1.37.0 │ 01 Nov 25 09:18 UTC │ 01 Nov 25 09:18 UTC │
	│ start   │ -p old-k8s-version-152344 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-152344 │ jenkins │ v1.37.0 │ 01 Nov 25 09:18 UTC │ 01 Nov 25 09:19 UTC │
	│ delete  │ -p running-upgrade-274843                                                                                                                                                                                                                     │ running-upgrade-274843 │ jenkins │ v1.37.0 │ 01 Nov 25 09:18 UTC │ 01 Nov 25 09:18 UTC │
	│ start   │ -p no-preload-397460 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-397460      │ jenkins │ v1.37.0 │ 01 Nov 25 09:18 UTC │ 01 Nov 25 09:19 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-152344 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-152344 │ jenkins │ v1.37.0 │ 01 Nov 25 09:19 UTC │                     │
	│ stop    │ -p old-k8s-version-152344 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-152344 │ jenkins │ v1.37.0 │ 01 Nov 25 09:19 UTC │ 01 Nov 25 09:19 UTC │
	│ start   │ -p cert-expiration-303094 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-303094 │ jenkins │ v1.37.0 │ 01 Nov 25 09:19 UTC │ 01 Nov 25 09:19 UTC │
	│ addons  │ enable metrics-server -p no-preload-397460 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-397460      │ jenkins │ v1.37.0 │ 01 Nov 25 09:19 UTC │                     │
	│ delete  │ -p cert-expiration-303094                                                                                                                                                                                                                     │ cert-expiration-303094 │ jenkins │ v1.37.0 │ 01 Nov 25 09:19 UTC │ 01 Nov 25 09:19 UTC │
	│ stop    │ -p no-preload-397460 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-397460      │ jenkins │ v1.37.0 │ 01 Nov 25 09:19 UTC │ 01 Nov 25 09:20 UTC │
	│ start   │ -p embed-certs-236314 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-236314     │ jenkins │ v1.37.0 │ 01 Nov 25 09:19 UTC │ 01 Nov 25 09:20 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-152344 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-152344 │ jenkins │ v1.37.0 │ 01 Nov 25 09:19 UTC │ 01 Nov 25 09:19 UTC │
	│ start   │ -p old-k8s-version-152344 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-152344 │ jenkins │ v1.37.0 │ 01 Nov 25 09:19 UTC │ 01 Nov 25 09:20 UTC │
	│ addons  │ enable dashboard -p no-preload-397460 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-397460      │ jenkins │ v1.37.0 │ 01 Nov 25 09:20 UTC │ 01 Nov 25 09:20 UTC │
	│ start   │ -p no-preload-397460 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-397460      │ jenkins │ v1.37.0 │ 01 Nov 25 09:20 UTC │ 01 Nov 25 09:20 UTC │
	│ addons  │ enable metrics-server -p embed-certs-236314 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-236314     │ jenkins │ v1.37.0 │ 01 Nov 25 09:20 UTC │                     │
	│ stop    │ -p embed-certs-236314 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-236314     │ jenkins │ v1.37.0 │ 01 Nov 25 09:20 UTC │ 01 Nov 25 09:20 UTC │
	│ addons  │ enable dashboard -p embed-certs-236314 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-236314     │ jenkins │ v1.37.0 │ 01 Nov 25 09:20 UTC │ 01 Nov 25 09:20 UTC │
	│ start   │ -p embed-certs-236314 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-236314     │ jenkins │ v1.37.0 │ 01 Nov 25 09:20 UTC │                     │
	│ image   │ old-k8s-version-152344 image list --format=json                                                                                                                                                                                               │ old-k8s-version-152344 │ jenkins │ v1.37.0 │ 01 Nov 25 09:21 UTC │ 01 Nov 25 09:21 UTC │
	│ pause   │ -p old-k8s-version-152344 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-152344 │ jenkins │ v1.37.0 │ 01 Nov 25 09:21 UTC │                     │
	│ image   │ no-preload-397460 image list --format=json                                                                                                                                                                                                    │ no-preload-397460      │ jenkins │ v1.37.0 │ 01 Nov 25 09:21 UTC │ 01 Nov 25 09:21 UTC │
	│ pause   │ -p no-preload-397460 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-397460      │ jenkins │ v1.37.0 │ 01 Nov 25 09:21 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴────────────────────
─┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/01 09:20:59
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1101 09:20:59.428556  256247 out.go:360] Setting OutFile to fd 1 ...
	I1101 09:20:59.428818  256247 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:20:59.428829  256247 out.go:374] Setting ErrFile to fd 2...
	I1101 09:20:59.428834  256247 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:20:59.429086  256247 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21835-5913/.minikube/bin
	I1101 09:20:59.429532  256247 out.go:368] Setting JSON to false
	I1101 09:20:59.430786  256247 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":3807,"bootTime":1761985052,"procs":302,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1101 09:20:59.430894  256247 start.go:143] virtualization: kvm guest
	I1101 09:20:59.433043  256247 out.go:179] * [embed-certs-236314] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1101 09:20:59.434385  256247 out.go:179]   - MINIKUBE_LOCATION=21835
	I1101 09:20:59.434401  256247 notify.go:221] Checking for updates...
	I1101 09:20:59.436934  256247 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1101 09:20:59.438200  256247 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21835-5913/kubeconfig
	I1101 09:20:59.439375  256247 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21835-5913/.minikube
	I1101 09:20:59.440741  256247 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1101 09:20:59.441964  256247 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1101 09:20:59.443571  256247 config.go:182] Loaded profile config "embed-certs-236314": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:20:59.444071  256247 driver.go:422] Setting default libvirt URI to qemu:///system
	I1101 09:20:59.469169  256247 docker.go:124] docker version: linux-28.5.1:Docker Engine - Community
	I1101 09:20:59.469258  256247 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 09:20:59.527156  256247 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:false NGoroutines:75 SystemTime:2025-11-01 09:20:59.517141551 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1101 09:20:59.527271  256247 docker.go:319] overlay module found
	I1101 09:20:59.529297  256247 out.go:179] * Using the docker driver based on existing profile
	I1101 09:20:59.530647  256247 start.go:309] selected driver: docker
	I1101 09:20:59.530667  256247 start.go:930] validating driver "docker" against &{Name:embed-certs-236314 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-236314 Namespace:default APIServerHAVIP: APIServerN
ame:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:
9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 09:20:59.530767  256247 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1101 09:20:59.531375  256247 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 09:20:59.591821  256247 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:false NGoroutines:75 SystemTime:2025-11-01 09:20:59.581513893 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1101 09:20:59.592141  256247 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1101 09:20:59.592171  256247 cni.go:84] Creating CNI manager for ""
	I1101 09:20:59.592221  256247 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1101 09:20:59.592272  256247 start.go:353] cluster config:
	{Name:embed-certs-236314 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-236314 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Contain
erRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false
DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 09:20:59.594010  256247 out.go:179] * Starting "embed-certs-236314" primary control-plane node in "embed-certs-236314" cluster
	I1101 09:20:59.595373  256247 cache.go:124] Beginning downloading kic base image for docker with crio
	I1101 09:20:59.596730  256247 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1101 09:20:59.598034  256247 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1101 09:20:59.598070  256247 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1101 09:20:59.598094  256247 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21835-5913/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1101 09:20:59.598109  256247 cache.go:59] Caching tarball of preloaded images
	I1101 09:20:59.598199  256247 preload.go:233] Found /home/jenkins/minikube-integration/21835-5913/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1101 09:20:59.598213  256247 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1101 09:20:59.598344  256247 profile.go:143] Saving config to /home/jenkins/minikube-integration/21835-5913/.minikube/profiles/embed-certs-236314/config.json ...
	I1101 09:20:59.620135  256247 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1101 09:20:59.620159  256247 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1101 09:20:59.620175  256247 cache.go:233] Successfully downloaded all kic artifacts
	I1101 09:20:59.620198  256247 start.go:360] acquireMachinesLock for embed-certs-236314: {Name:mk8eda201f80ebfb2f2bb01891a2b839f76263b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1101 09:20:59.620253  256247 start.go:364] duration metric: took 37.33µs to acquireMachinesLock for "embed-certs-236314"
	I1101 09:20:59.620271  256247 start.go:96] Skipping create...Using existing machine configuration
	I1101 09:20:59.620276  256247 fix.go:54] fixHost starting: 
	I1101 09:20:59.620535  256247 cli_runner.go:164] Run: docker container inspect embed-certs-236314 --format={{.State.Status}}
	I1101 09:20:59.639500  256247 fix.go:112] recreateIfNeeded on embed-certs-236314: state=Stopped err=<nil>
	W1101 09:20:59.639552  256247 fix.go:138] unexpected machine state, will restart: <nil>
	I1101 09:20:58.468250  216020 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": (10.063103618s)
	W1101 09:20:58.468297  216020 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	Unable to connect to the server: net/http: TLS handshake timeout
	 output: 
	** stderr ** 
	Unable to connect to the server: net/http: TLS handshake timeout
	
	** /stderr **
	I1101 09:20:58.468307  216020 logs.go:123] Gathering logs for kube-apiserver [b88c1a6c22eb7f0b872bf19c02addfdad95c9f578a8590d19c57a767cb19559f] ...
	I1101 09:20:58.468322  216020 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b88c1a6c22eb7f0b872bf19c02addfdad95c9f578a8590d19c57a767cb19559f"
	I1101 09:20:59.641465  256247 out.go:252] * Restarting existing docker container for "embed-certs-236314" ...
	I1101 09:20:59.641539  256247 cli_runner.go:164] Run: docker start embed-certs-236314
	I1101 09:20:59.902231  256247 cli_runner.go:164] Run: docker container inspect embed-certs-236314 --format={{.State.Status}}
	I1101 09:20:59.922418  256247 kic.go:430] container "embed-certs-236314" state is running.
	I1101 09:20:59.922774  256247 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-236314
	I1101 09:20:59.942287  256247 profile.go:143] Saving config to /home/jenkins/minikube-integration/21835-5913/.minikube/profiles/embed-certs-236314/config.json ...
	I1101 09:20:59.942510  256247 machine.go:94] provisionDockerMachine start ...
	I1101 09:20:59.942564  256247 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-236314
	I1101 09:20:59.962745  256247 main.go:143] libmachine: Using SSH client type: native
	I1101 09:20:59.963041  256247 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33078 <nil> <nil>}
	I1101 09:20:59.963056  256247 main.go:143] libmachine: About to run SSH command:
	hostname
	I1101 09:20:59.963759  256247 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:36770->127.0.0.1:33078: read: connection reset by peer
	I1101 09:21:03.117563  256247 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-236314
	
	I1101 09:21:03.117594  256247 ubuntu.go:182] provisioning hostname "embed-certs-236314"
	I1101 09:21:03.117656  256247 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-236314
	I1101 09:21:03.142432  256247 main.go:143] libmachine: Using SSH client type: native
	I1101 09:21:03.142735  256247 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33078 <nil> <nil>}
	I1101 09:21:03.142750  256247 main.go:143] libmachine: About to run SSH command:
	sudo hostname embed-certs-236314 && echo "embed-certs-236314" | sudo tee /etc/hostname
	I1101 09:21:03.305920  256247 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-236314
	
	I1101 09:21:03.306024  256247 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-236314
	I1101 09:21:03.331383  256247 main.go:143] libmachine: Using SSH client type: native
	I1101 09:21:03.331747  256247 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33078 <nil> <nil>}
	I1101 09:21:03.331779  256247 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-236314' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-236314/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-236314' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1101 09:21:03.490967  256247 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1101 09:21:03.491130  256247 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21835-5913/.minikube CaCertPath:/home/jenkins/minikube-integration/21835-5913/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21835-5913/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21835-5913/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21835-5913/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21835-5913/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21835-5913/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21835-5913/.minikube}
	I1101 09:21:03.491284  256247 ubuntu.go:190] setting up certificates
	I1101 09:21:03.491299  256247 provision.go:84] configureAuth start
	I1101 09:21:03.491599  256247 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-236314
	I1101 09:21:03.518311  256247 provision.go:143] copyHostCerts
	I1101 09:21:03.518365  256247 exec_runner.go:144] found /home/jenkins/minikube-integration/21835-5913/.minikube/ca.pem, removing ...
	I1101 09:21:03.518385  256247 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21835-5913/.minikube/ca.pem
	I1101 09:21:03.518461  256247 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21835-5913/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21835-5913/.minikube/ca.pem (1078 bytes)
	I1101 09:21:03.518591  256247 exec_runner.go:144] found /home/jenkins/minikube-integration/21835-5913/.minikube/cert.pem, removing ...
	I1101 09:21:03.518605  256247 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21835-5913/.minikube/cert.pem
	I1101 09:21:03.518646  256247 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21835-5913/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21835-5913/.minikube/cert.pem (1123 bytes)
	I1101 09:21:03.518803  256247 exec_runner.go:144] found /home/jenkins/minikube-integration/21835-5913/.minikube/key.pem, removing ...
	I1101 09:21:03.518819  256247 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21835-5913/.minikube/key.pem
	I1101 09:21:03.518904  256247 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21835-5913/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21835-5913/.minikube/key.pem (1675 bytes)
	I1101 09:21:03.519019  256247 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21835-5913/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21835-5913/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21835-5913/.minikube/certs/ca-key.pem org=jenkins.embed-certs-236314 san=[127.0.0.1 192.168.76.2 embed-certs-236314 localhost minikube]
	I1101 09:21:03.680093  256247 provision.go:177] copyRemoteCerts
	I1101 09:21:03.680157  256247 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1101 09:21:03.680200  256247 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-236314
	I1101 09:21:03.704893  256247 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33078 SSHKeyPath:/home/jenkins/minikube-integration/21835-5913/.minikube/machines/embed-certs-236314/id_rsa Username:docker}
	I1101 09:21:03.816511  256247 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-5913/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1101 09:21:03.837369  256247 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-5913/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1101 09:21:03.857322  256247 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-5913/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1101 09:21:03.878326  256247 provision.go:87] duration metric: took 387.013499ms to configureAuth
	I1101 09:21:03.878358  256247 ubuntu.go:206] setting minikube options for container-runtime
	I1101 09:21:03.878577  256247 config.go:182] Loaded profile config "embed-certs-236314": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:21:03.878713  256247 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-236314
	I1101 09:21:03.901022  256247 main.go:143] libmachine: Using SSH client type: native
	I1101 09:21:03.901326  256247 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33078 <nil> <nil>}
	I1101 09:21:03.901362  256247 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1101 09:21:04.244254  256247 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1101 09:21:04.244281  256247 machine.go:97] duration metric: took 4.30175559s to provisionDockerMachine
	I1101 09:21:04.244294  256247 start.go:293] postStartSetup for "embed-certs-236314" (driver="docker")
	I1101 09:21:04.244307  256247 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1101 09:21:04.244385  256247 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1101 09:21:04.244488  256247 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-236314
	I1101 09:21:04.266806  256247 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33078 SSHKeyPath:/home/jenkins/minikube-integration/21835-5913/.minikube/machines/embed-certs-236314/id_rsa Username:docker}
	I1101 09:21:04.373789  256247 ssh_runner.go:195] Run: cat /etc/os-release
	I1101 09:21:04.379229  256247 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1101 09:21:04.379281  256247 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1101 09:21:04.379293  256247 filesync.go:126] Scanning /home/jenkins/minikube-integration/21835-5913/.minikube/addons for local assets ...
	I1101 09:21:04.379364  256247 filesync.go:126] Scanning /home/jenkins/minikube-integration/21835-5913/.minikube/files for local assets ...
	I1101 09:21:04.379500  256247 filesync.go:149] local asset: /home/jenkins/minikube-integration/21835-5913/.minikube/files/etc/ssl/certs/94142.pem -> 94142.pem in /etc/ssl/certs
	I1101 09:21:04.379692  256247 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1101 09:21:04.389334  256247 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-5913/.minikube/files/etc/ssl/certs/94142.pem --> /etc/ssl/certs/94142.pem (1708 bytes)
	I1101 09:21:04.409193  256247 start.go:296] duration metric: took 164.885201ms for postStartSetup
	I1101 09:21:04.409277  256247 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1101 09:21:04.409326  256247 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-236314
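The provisioning steps logged above generate a server certificate with SANs [127.0.0.1 192.168.76.2 embed-certs-236314 localhost minikube] and copy it to /etc/docker on the node. A minimal, hypothetical spot-check of that certificate from the host, assuming openssl is available inside the node image (container name and cert path are taken from the log; the openssl flags are standard):

    docker exec embed-certs-236314 sh -c \
      'openssl x509 -in /etc/docker/server.pem -noout -text | grep -A1 "Subject Alternative Name"'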
	I1101 09:21:01.003189  216020 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1101 09:21:03.032694  216020 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": read tcp 192.168.85.1:46726->192.168.85.2:8443: read: connection reset by peer
	I1101 09:21:03.032778  216020 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1101 09:21:03.032833  216020 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1101 09:21:03.071972  216020 cri.go:89] found id: "b88c1a6c22eb7f0b872bf19c02addfdad95c9f578a8590d19c57a767cb19559f"
	I1101 09:21:03.072009  216020 cri.go:89] found id: "f1d1ad774988f50a8cc9b852d65649e1e6961c9ea7280d9c95161580ca21bce2"
	I1101 09:21:03.072015  216020 cri.go:89] found id: ""
	I1101 09:21:03.072023  216020 logs.go:282] 2 containers: [b88c1a6c22eb7f0b872bf19c02addfdad95c9f578a8590d19c57a767cb19559f f1d1ad774988f50a8cc9b852d65649e1e6961c9ea7280d9c95161580ca21bce2]
	I1101 09:21:03.072078  216020 ssh_runner.go:195] Run: which crictl
	I1101 09:21:03.077222  216020 ssh_runner.go:195] Run: which crictl
	I1101 09:21:03.081318  216020 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1101 09:21:03.081392  216020 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1101 09:21:03.113396  216020 cri.go:89] found id: ""
	I1101 09:21:03.113424  216020 logs.go:282] 0 containers: []
	W1101 09:21:03.113435  216020 logs.go:284] No container was found matching "etcd"
	I1101 09:21:03.113442  216020 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1101 09:21:03.113499  216020 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1101 09:21:03.149060  216020 cri.go:89] found id: ""
	I1101 09:21:03.149090  216020 logs.go:282] 0 containers: []
	W1101 09:21:03.149099  216020 logs.go:284] No container was found matching "coredns"
	I1101 09:21:03.149104  216020 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1101 09:21:03.149148  216020 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1101 09:21:03.183012  216020 cri.go:89] found id: "7bc51deb4b19b20f402b361941a1b67db11ece9919ebd34259e751ec1f07abb8"
	I1101 09:21:03.183038  216020 cri.go:89] found id: ""
	I1101 09:21:03.183048  216020 logs.go:282] 1 containers: [7bc51deb4b19b20f402b361941a1b67db11ece9919ebd34259e751ec1f07abb8]
	I1101 09:21:03.183109  216020 ssh_runner.go:195] Run: which crictl
	I1101 09:21:03.187539  216020 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1101 09:21:03.187609  216020 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1101 09:21:03.223255  216020 cri.go:89] found id: ""
	I1101 09:21:03.223287  216020 logs.go:282] 0 containers: []
	W1101 09:21:03.223298  216020 logs.go:284] No container was found matching "kube-proxy"
	I1101 09:21:03.223307  216020 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1101 09:21:03.223380  216020 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1101 09:21:03.263664  216020 cri.go:89] found id: "df274c74effda434b9fae196ccc17a5c3a3d2b7872fbe77a79ff7e34adc6b3cd"
	I1101 09:21:03.263684  216020 cri.go:89] found id: ""
	I1101 09:21:03.263691  216020 logs.go:282] 1 containers: [df274c74effda434b9fae196ccc17a5c3a3d2b7872fbe77a79ff7e34adc6b3cd]
	I1101 09:21:03.263745  216020 ssh_runner.go:195] Run: which crictl
	I1101 09:21:03.268387  216020 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1101 09:21:03.268458  216020 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1101 09:21:03.298659  216020 cri.go:89] found id: ""
	I1101 09:21:03.298685  216020 logs.go:282] 0 containers: []
	W1101 09:21:03.298697  216020 logs.go:284] No container was found matching "kindnet"
	I1101 09:21:03.298704  216020 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1101 09:21:03.298760  216020 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1101 09:21:03.331908  216020 cri.go:89] found id: ""
	I1101 09:21:03.331952  216020 logs.go:282] 0 containers: []
	W1101 09:21:03.331963  216020 logs.go:284] No container was found matching "storage-provisioner"
	I1101 09:21:03.331985  216020 logs.go:123] Gathering logs for container status ...
	I1101 09:21:03.332000  216020 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1101 09:21:03.371380  216020 logs.go:123] Gathering logs for kubelet ...
	I1101 09:21:03.371411  216020 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1101 09:21:03.492909  216020 logs.go:123] Gathering logs for kube-apiserver [b88c1a6c22eb7f0b872bf19c02addfdad95c9f578a8590d19c57a767cb19559f] ...
	I1101 09:21:03.492943  216020 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b88c1a6c22eb7f0b872bf19c02addfdad95c9f578a8590d19c57a767cb19559f"
	I1101 09:21:03.535514  216020 logs.go:123] Gathering logs for kube-controller-manager [df274c74effda434b9fae196ccc17a5c3a3d2b7872fbe77a79ff7e34adc6b3cd] ...
	I1101 09:21:03.535554  216020 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 df274c74effda434b9fae196ccc17a5c3a3d2b7872fbe77a79ff7e34adc6b3cd"
	I1101 09:21:03.567725  216020 logs.go:123] Gathering logs for CRI-O ...
	I1101 09:21:03.567751  216020 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1101 09:21:03.624454  216020 logs.go:123] Gathering logs for dmesg ...
	I1101 09:21:03.624492  216020 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1101 09:21:03.641942  216020 logs.go:123] Gathering logs for describe nodes ...
	I1101 09:21:03.641988  216020 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1101 09:21:03.724699  216020 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1101 09:21:03.724722  216020 logs.go:123] Gathering logs for kube-apiserver [f1d1ad774988f50a8cc9b852d65649e1e6961c9ea7280d9c95161580ca21bce2] ...
	I1101 09:21:03.724739  216020 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f1d1ad774988f50a8cc9b852d65649e1e6961c9ea7280d9c95161580ca21bce2"
	I1101 09:21:03.762460  216020 logs.go:123] Gathering logs for kube-scheduler [7bc51deb4b19b20f402b361941a1b67db11ece9919ebd34259e751ec1f07abb8] ...
	I1101 09:21:03.762492  216020 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7bc51deb4b19b20f402b361941a1b67db11ece9919ebd34259e751ec1f07abb8"
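The log-gathering loop above can be reproduced by hand on the affected node; a sketch using only commands already shown in this log (container IDs differ per run, and the placeholder below is just that, a placeholder):

    sudo crictl ps -a --quiet --name=kube-apiserver   # list apiserver containers, running or exited
    sudo crictl logs --tail 400 <container-id>        # dump one container's recent logs
    sudo journalctl -u kubelet -n 400                 # kubelet logs
    sudo journalctl -u crio -n 400                    # CRI-O logs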
	I1101 09:21:04.430167  256247 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33078 SSHKeyPath:/home/jenkins/minikube-integration/21835-5913/.minikube/machines/embed-certs-236314/id_rsa Username:docker}
	I1101 09:21:04.534691  256247 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1101 09:21:04.540247  256247 fix.go:56] duration metric: took 4.919961684s for fixHost
	I1101 09:21:04.540279  256247 start.go:83] releasing machines lock for "embed-certs-236314", held for 4.920014606s
	I1101 09:21:04.540355  256247 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-236314
	I1101 09:21:04.562607  256247 ssh_runner.go:195] Run: cat /version.json
	I1101 09:21:04.562640  256247 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1101 09:21:04.562664  256247 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-236314
	I1101 09:21:04.562717  256247 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-236314
	I1101 09:21:04.586247  256247 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33078 SSHKeyPath:/home/jenkins/minikube-integration/21835-5913/.minikube/machines/embed-certs-236314/id_rsa Username:docker}
	I1101 09:21:04.586573  256247 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33078 SSHKeyPath:/home/jenkins/minikube-integration/21835-5913/.minikube/machines/embed-certs-236314/id_rsa Username:docker}
	I1101 09:21:04.756853  256247 ssh_runner.go:195] Run: systemctl --version
	I1101 09:21:04.764787  256247 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1101 09:21:04.802673  256247 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1101 09:21:04.807927  256247 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1101 09:21:04.808018  256247 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1101 09:21:04.816976  256247 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1101 09:21:04.817002  256247 start.go:496] detecting cgroup driver to use...
	I1101 09:21:04.817034  256247 detect.go:190] detected "systemd" cgroup driver on host os
	I1101 09:21:04.817072  256247 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1101 09:21:04.832478  256247 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1101 09:21:04.845735  256247 docker.go:218] disabling cri-docker service (if available) ...
	I1101 09:21:04.845796  256247 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1101 09:21:04.863128  256247 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1101 09:21:04.880355  256247 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1101 09:21:04.984886  256247 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1101 09:21:05.081764  256247 docker.go:234] disabling docker service ...
	I1101 09:21:05.081839  256247 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1101 09:21:05.098206  256247 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1101 09:21:05.114058  256247 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1101 09:21:05.223021  256247 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1101 09:21:05.317262  256247 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1101 09:21:05.333800  256247 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1101 09:21:05.353462  256247 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1101 09:21:05.353523  256247 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:21:05.366081  256247 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1101 09:21:05.366163  256247 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:21:05.377341  256247 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:21:05.389161  256247 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:21:05.399923  256247 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1101 09:21:05.411569  256247 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:21:05.422482  256247 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:21:05.433571  256247 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:21:05.444037  256247 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1101 09:21:05.453109  256247 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1101 09:21:05.462316  256247 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 09:21:05.552996  256247 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1101 09:21:05.672436  256247 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1101 09:21:05.672507  256247 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1101 09:21:05.677194  256247 start.go:564] Will wait 60s for crictl version
	I1101 09:21:05.677262  256247 ssh_runner.go:195] Run: which crictl
	I1101 09:21:05.681691  256247 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1101 09:21:05.712910  256247 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1101 09:21:05.713006  256247 ssh_runner.go:195] Run: crio --version
	I1101 09:21:05.749945  256247 ssh_runner.go:195] Run: crio --version
	I1101 09:21:05.785002  256247 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
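For reference, the sed edits logged above should leave the CRI runtime configuration roughly as follows. This is reconstructed from the commands in the log, not copied from the node, so it is an approximation of the relevant keys only:

    # /etc/crictl.yaml
    runtime-endpoint: unix:///var/run/crio/crio.sock

    # /etc/crio/crio.conf.d/02-crio.conf (relevant keys)
    pause_image = "registry.k8s.io/pause:3.10.1"
    cgroup_manager = "systemd"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]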
	
	
	==> CRI-O <==
	Nov 01 09:20:28 no-preload-397460 crio[560]: time="2025-11-01T09:20:28.867021137Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 01 09:20:28 no-preload-397460 crio[560]: time="2025-11-01T09:20:28.870753974Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 01 09:20:28 no-preload-397460 crio[560]: time="2025-11-01T09:20:28.870786004Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 01 09:20:39 no-preload-397460 crio[560]: time="2025-11-01T09:20:39.032956939Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=d22c7ff2-4e04-4250-a489-54a647296eda name=/runtime.v1.ImageService/ImageStatus
	Nov 01 09:20:39 no-preload-397460 crio[560]: time="2025-11-01T09:20:39.035884802Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=80e8d0f5-9f94-4c85-9ad0-e866d6a087ee name=/runtime.v1.ImageService/ImageStatus
	Nov 01 09:20:39 no-preload-397460 crio[560]: time="2025-11-01T09:20:39.039177164Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-mxwvk/dashboard-metrics-scraper" id=91030c83-554b-4a8c-a9d1-d447ae9a9b59 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 09:20:39 no-preload-397460 crio[560]: time="2025-11-01T09:20:39.039317502Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 09:20:39 no-preload-397460 crio[560]: time="2025-11-01T09:20:39.046277936Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 09:20:39 no-preload-397460 crio[560]: time="2025-11-01T09:20:39.046716124Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 09:20:39 no-preload-397460 crio[560]: time="2025-11-01T09:20:39.075060785Z" level=info msg="Created container 6dc0e61dffecc42ec75132038f2cd325ebcc36b066eb79f9d644c0a144d6656c: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-mxwvk/dashboard-metrics-scraper" id=91030c83-554b-4a8c-a9d1-d447ae9a9b59 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 09:20:39 no-preload-397460 crio[560]: time="2025-11-01T09:20:39.075716902Z" level=info msg="Starting container: 6dc0e61dffecc42ec75132038f2cd325ebcc36b066eb79f9d644c0a144d6656c" id=bfc3a31b-0583-49f5-b70b-68d2b292af6c name=/runtime.v1.RuntimeService/StartContainer
	Nov 01 09:20:39 no-preload-397460 crio[560]: time="2025-11-01T09:20:39.077594718Z" level=info msg="Started container" PID=1745 containerID=6dc0e61dffecc42ec75132038f2cd325ebcc36b066eb79f9d644c0a144d6656c description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-mxwvk/dashboard-metrics-scraper id=bfc3a31b-0583-49f5-b70b-68d2b292af6c name=/runtime.v1.RuntimeService/StartContainer sandboxID=0fffdfd7a1db85635071e32b10e8e0adcd42227943175d281114edde3f437b74
	Nov 01 09:20:39 no-preload-397460 crio[560]: time="2025-11-01T09:20:39.133190526Z" level=info msg="Removing container: d2f3bc7f04da9b00ddee1f4f9529af8fb1606fb48b94f1c2e998026782488924" id=33aad358-f361-40ed-984d-78f16aee70e7 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 01 09:20:39 no-preload-397460 crio[560]: time="2025-11-01T09:20:39.142992183Z" level=info msg="Removed container d2f3bc7f04da9b00ddee1f4f9529af8fb1606fb48b94f1c2e998026782488924: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-mxwvk/dashboard-metrics-scraper" id=33aad358-f361-40ed-984d-78f16aee70e7 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 01 09:20:49 no-preload-397460 crio[560]: time="2025-11-01T09:20:49.160754599Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=945614cb-d248-4411-9665-6a034929fa52 name=/runtime.v1.ImageService/ImageStatus
	Nov 01 09:20:49 no-preload-397460 crio[560]: time="2025-11-01T09:20:49.161765852Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=6bc6bd02-8499-45cc-962e-66e2a624e536 name=/runtime.v1.ImageService/ImageStatus
	Nov 01 09:20:49 no-preload-397460 crio[560]: time="2025-11-01T09:20:49.163071914Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=9fc7cf02-dbfe-46ce-bb91-d5e5e475e96c name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 09:20:49 no-preload-397460 crio[560]: time="2025-11-01T09:20:49.163224959Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 09:20:49 no-preload-397460 crio[560]: time="2025-11-01T09:20:49.167829683Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 09:20:49 no-preload-397460 crio[560]: time="2025-11-01T09:20:49.168028388Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/9cc9774ec6cfe4c2139ee2bc1e61266e53450df6187adfe307c3fef6052b6722/merged/etc/passwd: no such file or directory"
	Nov 01 09:20:49 no-preload-397460 crio[560]: time="2025-11-01T09:20:49.168054784Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/9cc9774ec6cfe4c2139ee2bc1e61266e53450df6187adfe307c3fef6052b6722/merged/etc/group: no such file or directory"
	Nov 01 09:20:49 no-preload-397460 crio[560]: time="2025-11-01T09:20:49.168327099Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 09:20:49 no-preload-397460 crio[560]: time="2025-11-01T09:20:49.198229242Z" level=info msg="Created container a32fea6f71cfef9621e6124cb4e8c00bce86e68063a63d6eead05eb0b30c3fd2: kube-system/storage-provisioner/storage-provisioner" id=9fc7cf02-dbfe-46ce-bb91-d5e5e475e96c name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 09:20:49 no-preload-397460 crio[560]: time="2025-11-01T09:20:49.198890909Z" level=info msg="Starting container: a32fea6f71cfef9621e6124cb4e8c00bce86e68063a63d6eead05eb0b30c3fd2" id=edcaba22-7191-4881-9053-e53519a68d8e name=/runtime.v1.RuntimeService/StartContainer
	Nov 01 09:20:49 no-preload-397460 crio[560]: time="2025-11-01T09:20:49.200662128Z" level=info msg="Started container" PID=1759 containerID=a32fea6f71cfef9621e6124cb4e8c00bce86e68063a63d6eead05eb0b30c3fd2 description=kube-system/storage-provisioner/storage-provisioner id=edcaba22-7191-4881-9053-e53519a68d8e name=/runtime.v1.RuntimeService/StartContainer sandboxID=ef7af1c6a35d25eb2133ac2f491e4a3ab62a55ded9568dfe79df80175e1e9df1
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	a32fea6f71cfe       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           18 seconds ago      Running             storage-provisioner         1                   ef7af1c6a35d2       storage-provisioner                          kube-system
	6dc0e61dffecc       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           28 seconds ago      Exited              dashboard-metrics-scraper   2                   0fffdfd7a1db8       dashboard-metrics-scraper-6ffb444bf9-mxwvk   kubernetes-dashboard
	6fcf14948dd8f       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   39 seconds ago      Running             kubernetes-dashboard        0                   63e6751da3226       kubernetes-dashboard-855c9754f9-89s5g        kubernetes-dashboard
	e4ac2478e800b       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           48 seconds ago      Running             busybox                     1                   9c0d28f1b654b       busybox                                      default
	7f8c011365813       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                           48 seconds ago      Running             coredns                     0                   41d16bfc949c3       coredns-66bc5c9577-z5578                     kube-system
	e68d2c72f2a28       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           48 seconds ago      Exited              storage-provisioner         0                   ef7af1c6a35d2       storage-provisioner                          kube-system
	4e0ef8237ebe0       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           48 seconds ago      Running             kindnet-cni                 0                   ed241a8f8412d       kindnet-lddf5                                kube-system
	60a457721a2e9       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                           48 seconds ago      Running             kube-proxy                  0                   68578b483307b       kube-proxy-5kpft                             kube-system
	fd6c2e5673978       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                           51 seconds ago      Running             etcd                        0                   10169d1e422b2       etcd-no-preload-397460                       kube-system
	1e65eafe05118       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                           51 seconds ago      Running             kube-controller-manager     0                   82a2de5dc7246       kube-controller-manager-no-preload-397460    kube-system
	1b44261dff64d       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                           51 seconds ago      Running             kube-apiserver              0                   02178c5e6fb24       kube-apiserver-no-preload-397460             kube-system
	a154077d09e97       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                           51 seconds ago      Running             kube-scheduler              0                   1751a62d0fb12       kube-scheduler-no-preload-397460             kube-system
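The container status table above corresponds to what crictl prints on the node; as a pointer rather than output from this run, the same view and a per-container detail dump can be obtained with:

    sudo crictl ps -a                    # table like the one above
    sudo crictl inspect <container-id>   # full JSON status for one container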
	
	
	==> coredns [7f8c011365813d9b9a3fc4de470f15a11f84b8fc7448785cac2980d200ab6328] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = c7556d8fdf49c5e32a9077be8cfb9fc6947bb07e663a10d55b192eb63ad1f2bd9793e8e5f5a36fc9abb1957831eec5c997fd9821790e3990ae9531bf41ecea37
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:44025 - 14277 "HINFO IN 5301831790333311994.5848890886534015879. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.524150284s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               no-preload-397460
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-397460
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=21e20c7776311c6e29254646bf2620ea610dd192
	                    minikube.k8s.io/name=no-preload-397460
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_01T09_19_17_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 01 Nov 2025 09:19:14 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-397460
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 01 Nov 2025 09:20:57 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 01 Nov 2025 09:20:47 +0000   Sat, 01 Nov 2025 09:19:12 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 01 Nov 2025 09:20:47 +0000   Sat, 01 Nov 2025 09:19:12 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 01 Nov 2025 09:20:47 +0000   Sat, 01 Nov 2025 09:19:12 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 01 Nov 2025 09:20:47 +0000   Sat, 01 Nov 2025 09:19:35 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.94.2
	  Hostname:    no-preload-397460
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863348Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863348Ki
	  pods:               110
	System Info:
	  Machine ID:                 98aac72b9abe9f06f1b9b38568f5cc96
	  System UUID:                782711df-25d2-4083-899f-9ab94eb16882
	  Boot ID:                    87f40279-f2b1-40f3-beea-00aea5942dd1
	  Kernel Version:             6.8.0-1043-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         90s
	  kube-system                 coredns-66bc5c9577-z5578                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     105s
	  kube-system                 etcd-no-preload-397460                        100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         110s
	  kube-system                 kindnet-lddf5                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      105s
	  kube-system                 kube-apiserver-no-preload-397460              250m (3%)     0 (0%)      0 (0%)           0 (0%)         110s
	  kube-system                 kube-controller-manager-no-preload-397460     200m (2%)     0 (0%)      0 (0%)           0 (0%)         110s
	  kube-system                 kube-proxy-5kpft                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         105s
	  kube-system                 kube-scheduler-no-preload-397460              100m (1%)     0 (0%)      0 (0%)           0 (0%)         110s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         104s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-mxwvk    0 (0%)        0 (0%)      0 (0%)           0 (0%)         46s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-89s5g         0 (0%)        0 (0%)      0 (0%)           0 (0%)         46s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 104s               kube-proxy       
	  Normal  Starting                 48s                kube-proxy       
	  Normal  Starting                 111s               kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  110s               kubelet          Node no-preload-397460 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    110s               kubelet          Node no-preload-397460 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     110s               kubelet          Node no-preload-397460 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           106s               node-controller  Node no-preload-397460 event: Registered Node no-preload-397460 in Controller
	  Normal  NodeReady                92s                kubelet          Node no-preload-397460 status is now: NodeReady
	  Normal  Starting                 52s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  52s (x8 over 52s)  kubelet          Node no-preload-397460 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    52s (x8 over 52s)  kubelet          Node no-preload-397460 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     52s (x8 over 52s)  kubelet          Node no-preload-397460 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           47s                node-controller  Node no-preload-397460 event: Registered Node no-preload-397460 in Controller
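The node description above is standard kubectl describe output for no-preload-397460; the gatherer invokes it through the bundled binary, as the earlier log line shows, e.g.:

    sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig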
	
	
	==> dmesg <==
	[  +0.000025] ll header: 00000000: 0a 94 8e 4a 1a 1e 72 e7 eb 50 9b c7 08 00
	[  +2.047726] IPv4: martian source 10.105.176.244 from 192.168.49.2, on dev eth0
	[  +0.000008] ll header: 00000000: 0a 94 8e 4a 1a 1e 72 e7 eb 50 9b c7 08 00
	[Nov 1 08:38] IPv4: martian source 10.105.176.244 from 192.168.49.2, on dev eth0
	[  +0.000025] ll header: 00000000: 0a 94 8e 4a 1a 1e 72 e7 eb 50 9b c7 08 00
	[  +2.215674] IPv4: martian source 10.105.176.244 from 192.168.49.2, on dev eth0
	[  +0.000007] ll header: 00000000: 0a 94 8e 4a 1a 1e 72 e7 eb 50 9b c7 08 00
	[  +1.047939] IPv4: martian source 10.244.0.3 from 192.168.49.2, on dev eth0
	[  +0.000023] ll header: 00000000: 0a 94 8e 4a 1a 1e 72 e7 eb 50 9b c7 08 00
	[  +1.023915] IPv4: martian source 10.244.0.3 from 192.168.49.2, on dev eth0
	[  +0.000007] ll header: 00000000: 0a 94 8e 4a 1a 1e 72 e7 eb 50 9b c7 08 00
	[  +1.023851] IPv4: martian source 10.244.0.3 from 192.168.49.2, on dev eth0
	[  +0.000021] ll header: 00000000: 0a 94 8e 4a 1a 1e 72 e7 eb 50 9b c7 08 00
	[  +1.023867] IPv4: martian source 10.244.0.3 from 192.168.49.2, on dev eth0
	[  +0.000026] ll header: 00000000: 0a 94 8e 4a 1a 1e 72 e7 eb 50 9b c7 08 00
	[  +1.023880] IPv4: martian source 10.244.0.3 from 192.168.49.2, on dev eth0
	[  +0.000033] ll header: 00000000: 0a 94 8e 4a 1a 1e 72 e7 eb 50 9b c7 08 00
	[  +1.023884] IPv4: martian source 10.244.0.3 from 192.168.49.2, on dev eth0
	[  +0.000029] ll header: 00000000: 0a 94 8e 4a 1a 1e 72 e7 eb 50 9b c7 08 00
	[  +1.023851] IPv4: martian source 10.244.0.3 from 192.168.49.2, on dev eth0
	[  +0.000028] ll header: 00000000: 0a 94 8e 4a 1a 1e 72 e7 eb 50 9b c7 08 00
	[  +4.031524] IPv4: martian source 10.244.0.3 from 192.168.49.2, on dev eth0
	[  +0.000026] ll header: 00000000: 0a 94 8e 4a 1a 1e 72 e7 eb 50 9b c7 08 00
	[  +8.255123] IPv4: martian source 10.244.0.3 from 192.168.49.2, on dev eth0
	[  +0.000022] ll header: 00000000: 0a 94 8e 4a 1a 1e 72 e7 eb 50 9b c7 08 00
	
	
	==> etcd [fd6c2e567397890a7f512b409824d618cc40968b819d316cdae2a59eaaeee805] <==
	{"level":"warn","ts":"2025-11-01T09:20:16.643322Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47212","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:20:16.650024Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47226","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:20:16.659542Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47238","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:20:16.666374Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47252","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:20:16.673706Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47268","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:20:16.682857Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47286","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:20:16.689516Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47310","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:20:16.696047Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47328","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:20:16.703397Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47356","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:20:16.710728Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47372","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:20:16.718312Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47392","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:20:16.727141Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47414","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:20:16.736653Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47442","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:20:16.750705Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47458","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:20:16.758989Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47476","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:20:16.765783Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47494","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:20:16.772695Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47510","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:20:16.780911Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47520","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:20:16.788611Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47552","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:20:16.796916Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47560","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:20:16.803928Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47588","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:20:16.823325Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47612","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:20:16.830434Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47624","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:20:16.838541Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47644","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:20:16.892149Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47654","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 09:21:07 up  1:03,  0 user,  load average: 2.81, 2.49, 1.53
	Linux no-preload-397460 6.8.0-1043-gcp #46~22.04.1-Ubuntu SMP Wed Oct 22 19:00:03 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [4e0ef8237ebe095c9933e907c2062eaeb0996060e1bec0bf09d7e957557ac6f6] <==
	I1101 09:20:18.555147       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1101 09:20:18.555455       1 main.go:139] hostIP = 192.168.94.2
	podIP = 192.168.94.2
	I1101 09:20:18.555620       1 main.go:148] setting mtu 1500 for CNI 
	I1101 09:20:18.555642       1 main.go:178] kindnetd IP family: "ipv4"
	I1101 09:20:18.555667       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-01T09:20:18Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1101 09:20:18.845543       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1101 09:20:18.845590       1 controller.go:381] "Waiting for informer caches to sync"
	I1101 09:20:18.845604       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1101 09:20:18.845731       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1101 09:20:19.346847       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1101 09:20:19.346924       1 metrics.go:72] Registering metrics
	I1101 09:20:19.347014       1 controller.go:711] "Syncing nftables rules"
	I1101 09:20:28.845700       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1101 09:20:28.845776       1 main.go:301] handling current node
	I1101 09:20:38.846445       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1101 09:20:38.846473       1 main.go:301] handling current node
	I1101 09:20:48.845950       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1101 09:20:48.846003       1 main.go:301] handling current node
	I1101 09:20:58.852971       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1101 09:20:58.853010       1 main.go:301] handling current node
	
	
	==> kube-apiserver [1b44261dff64d3c4c47d37a621f72a585d7a917cf393d156278c5cbcf49d2100] <==
	I1101 09:20:17.483023       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1101 09:20:17.483138       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1101 09:20:17.483274       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1101 09:20:17.483330       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1101 09:20:17.483553       1 aggregator.go:171] initial CRD sync complete...
	I1101 09:20:17.483573       1 autoregister_controller.go:144] Starting autoregister controller
	I1101 09:20:17.483580       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1101 09:20:17.483586       1 cache.go:39] Caches are synced for autoregister controller
	I1101 09:20:17.483810       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1101 09:20:17.483827       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1101 09:20:17.485798       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1101 09:20:17.490505       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	E1101 09:20:17.490724       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1101 09:20:17.504242       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1101 09:20:17.818462       1 controller.go:667] quota admission added evaluator for: namespaces
	I1101 09:20:17.852135       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1101 09:20:17.875851       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1101 09:20:17.885386       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1101 09:20:17.897101       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1101 09:20:17.940507       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.107.166.253"}
	I1101 09:20:17.952569       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.111.121.40"}
	I1101 09:20:18.388643       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1101 09:20:21.258023       1 controller.go:667] quota admission added evaluator for: endpoints
	I1101 09:20:21.307923       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1101 09:20:21.360399       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [1e65eafe05118922eba4075b65156c842b7bb2e5dc4b74d48586e74e8830e4ad] <==
	I1101 09:20:20.834459       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1101 09:20:20.838740       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1101 09:20:20.838895       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1101 09:20:20.840221       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1101 09:20:20.840322       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1101 09:20:20.844615       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1101 09:20:20.854119       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1101 09:20:20.854217       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1101 09:20:20.854390       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1101 09:20:20.854410       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1101 09:20:20.854418       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1101 09:20:20.854546       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1101 09:20:20.855390       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1101 09:20:20.864144       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1101 09:20:20.864192       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1101 09:20:20.864234       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1101 09:20:20.864241       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1101 09:20:20.864248       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1101 09:20:20.868102       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1101 09:20:20.868235       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1101 09:20:20.868427       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="no-preload-397460"
	I1101 09:20:20.868495       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1101 09:20:20.871742       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1101 09:20:20.873915       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1101 09:20:20.881256       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	
	
	==> kube-proxy [60a457721a2e92a962fa3f6f4b3be7038d081f4c316d20147a3d72dd30c7da7e] <==
	I1101 09:20:18.439846       1 server_linux.go:53] "Using iptables proxy"
	I1101 09:20:18.523223       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1101 09:20:18.624167       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1101 09:20:18.624219       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.94.2"]
	E1101 09:20:18.624319       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1101 09:20:18.644248       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1101 09:20:18.644304       1 server_linux.go:132] "Using iptables Proxier"
	I1101 09:20:18.649696       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1101 09:20:18.650068       1 server.go:527] "Version info" version="v1.34.1"
	I1101 09:20:18.650105       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1101 09:20:18.652073       1 config.go:309] "Starting node config controller"
	I1101 09:20:18.652228       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1101 09:20:18.652249       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1101 09:20:18.652308       1 config.go:106] "Starting endpoint slice config controller"
	I1101 09:20:18.652322       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1101 09:20:18.652349       1 config.go:200] "Starting service config controller"
	I1101 09:20:18.652362       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1101 09:20:18.652380       1 config.go:403] "Starting serviceCIDR config controller"
	I1101 09:20:18.652395       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1101 09:20:18.752902       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1101 09:20:18.752927       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1101 09:20:18.752926       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [a154077d09e972273696e9d1d20b891c240a792171f425c23b57e8599069bf1b] <==
	I1101 09:20:17.341218       1 serving.go:386] Generated self-signed cert in-memory
	I1101 09:20:18.383798       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1101 09:20:18.383851       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1101 09:20:18.390564       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1101 09:20:18.390581       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1101 09:20:18.390609       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1101 09:20:18.390610       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1101 09:20:18.390653       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1101 09:20:18.390676       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1101 09:20:18.391024       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1101 09:20:18.391431       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1101 09:20:18.490803       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1101 09:20:18.490803       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1101 09:20:18.491263       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 01 09:20:21 no-preload-397460 kubelet[705]: I1101 09:20:21.572848     705 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cb5v5\" (UniqueName: \"kubernetes.io/projected/0656e39c-eaf6-4c88-9863-16f00e262508-kube-api-access-cb5v5\") pod \"kubernetes-dashboard-855c9754f9-89s5g\" (UID: \"0656e39c-eaf6-4c88-9863-16f00e262508\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-89s5g"
	Nov 01 09:20:21 no-preload-397460 kubelet[705]: I1101 09:20:21.572900     705 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ll9d5\" (UniqueName: \"kubernetes.io/projected/be52c87c-207d-4398-a867-fb7f8c1859ab-kube-api-access-ll9d5\") pod \"dashboard-metrics-scraper-6ffb444bf9-mxwvk\" (UID: \"be52c87c-207d-4398-a867-fb7f8c1859ab\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-mxwvk"
	Nov 01 09:20:21 no-preload-397460 kubelet[705]: I1101 09:20:21.572965     705 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/0656e39c-eaf6-4c88-9863-16f00e262508-tmp-volume\") pod \"kubernetes-dashboard-855c9754f9-89s5g\" (UID: \"0656e39c-eaf6-4c88-9863-16f00e262508\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-89s5g"
	Nov 01 09:20:25 no-preload-397460 kubelet[705]: I1101 09:20:25.087609     705 scope.go:117] "RemoveContainer" containerID="c05140c46ef4ef03bd67c307dc6d7294ebf6efa84832e5a599fe29a9d08311fc"
	Nov 01 09:20:26 no-preload-397460 kubelet[705]: I1101 09:20:26.093078     705 scope.go:117] "RemoveContainer" containerID="c05140c46ef4ef03bd67c307dc6d7294ebf6efa84832e5a599fe29a9d08311fc"
	Nov 01 09:20:26 no-preload-397460 kubelet[705]: I1101 09:20:26.093243     705 scope.go:117] "RemoveContainer" containerID="d2f3bc7f04da9b00ddee1f4f9529af8fb1606fb48b94f1c2e998026782488924"
	Nov 01 09:20:26 no-preload-397460 kubelet[705]: E1101 09:20:26.093517     705 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-mxwvk_kubernetes-dashboard(be52c87c-207d-4398-a867-fb7f8c1859ab)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-mxwvk" podUID="be52c87c-207d-4398-a867-fb7f8c1859ab"
	Nov 01 09:20:27 no-preload-397460 kubelet[705]: I1101 09:20:27.099267     705 scope.go:117] "RemoveContainer" containerID="d2f3bc7f04da9b00ddee1f4f9529af8fb1606fb48b94f1c2e998026782488924"
	Nov 01 09:20:27 no-preload-397460 kubelet[705]: E1101 09:20:27.099518     705 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-mxwvk_kubernetes-dashboard(be52c87c-207d-4398-a867-fb7f8c1859ab)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-mxwvk" podUID="be52c87c-207d-4398-a867-fb7f8c1859ab"
	Nov 01 09:20:28 no-preload-397460 kubelet[705]: I1101 09:20:28.103618     705 scope.go:117] "RemoveContainer" containerID="d2f3bc7f04da9b00ddee1f4f9529af8fb1606fb48b94f1c2e998026782488924"
	Nov 01 09:20:28 no-preload-397460 kubelet[705]: E1101 09:20:28.103796     705 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-mxwvk_kubernetes-dashboard(be52c87c-207d-4398-a867-fb7f8c1859ab)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-mxwvk" podUID="be52c87c-207d-4398-a867-fb7f8c1859ab"
	Nov 01 09:20:30 no-preload-397460 kubelet[705]: I1101 09:20:30.964239     705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-89s5g" podStartSLOduration=3.850255756 podStartE2EDuration="9.96421845s" podCreationTimestamp="2025-11-01 09:20:21 +0000 UTC" firstStartedPulling="2025-11-01 09:20:21.778371168 +0000 UTC m=+6.861819799" lastFinishedPulling="2025-11-01 09:20:27.892333863 +0000 UTC m=+12.975782493" observedRunningTime="2025-11-01 09:20:28.114332674 +0000 UTC m=+13.197781309" watchObservedRunningTime="2025-11-01 09:20:30.96421845 +0000 UTC m=+16.047667084"
	Nov 01 09:20:39 no-preload-397460 kubelet[705]: I1101 09:20:39.032419     705 scope.go:117] "RemoveContainer" containerID="d2f3bc7f04da9b00ddee1f4f9529af8fb1606fb48b94f1c2e998026782488924"
	Nov 01 09:20:39 no-preload-397460 kubelet[705]: I1101 09:20:39.131839     705 scope.go:117] "RemoveContainer" containerID="d2f3bc7f04da9b00ddee1f4f9529af8fb1606fb48b94f1c2e998026782488924"
	Nov 01 09:20:39 no-preload-397460 kubelet[705]: I1101 09:20:39.132078     705 scope.go:117] "RemoveContainer" containerID="6dc0e61dffecc42ec75132038f2cd325ebcc36b066eb79f9d644c0a144d6656c"
	Nov 01 09:20:39 no-preload-397460 kubelet[705]: E1101 09:20:39.132271     705 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-mxwvk_kubernetes-dashboard(be52c87c-207d-4398-a867-fb7f8c1859ab)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-mxwvk" podUID="be52c87c-207d-4398-a867-fb7f8c1859ab"
	Nov 01 09:20:46 no-preload-397460 kubelet[705]: I1101 09:20:46.899602     705 scope.go:117] "RemoveContainer" containerID="6dc0e61dffecc42ec75132038f2cd325ebcc36b066eb79f9d644c0a144d6656c"
	Nov 01 09:20:46 no-preload-397460 kubelet[705]: E1101 09:20:46.899895     705 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-mxwvk_kubernetes-dashboard(be52c87c-207d-4398-a867-fb7f8c1859ab)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-mxwvk" podUID="be52c87c-207d-4398-a867-fb7f8c1859ab"
	Nov 01 09:20:49 no-preload-397460 kubelet[705]: I1101 09:20:49.160405     705 scope.go:117] "RemoveContainer" containerID="e68d2c72f2a280bb06520d0abda5c4e4af21e514107ea8a34112ce09ad3363e9"
	Nov 01 09:20:58 no-preload-397460 kubelet[705]: I1101 09:20:58.032299     705 scope.go:117] "RemoveContainer" containerID="6dc0e61dffecc42ec75132038f2cd325ebcc36b066eb79f9d644c0a144d6656c"
	Nov 01 09:20:58 no-preload-397460 kubelet[705]: E1101 09:20:58.032498     705 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-mxwvk_kubernetes-dashboard(be52c87c-207d-4398-a867-fb7f8c1859ab)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-mxwvk" podUID="be52c87c-207d-4398-a867-fb7f8c1859ab"
	Nov 01 09:21:03 no-preload-397460 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 01 09:21:03 no-preload-397460 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 01 09:21:03 no-preload-397460 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Nov 01 09:21:03 no-preload-397460 systemd[1]: kubelet.service: Consumed 1.678s CPU time.
	
	
	==> kubernetes-dashboard [6fcf14948dd8fe2f7f1fbb5fe1c692b7d291220499560edc384c983cbef16d2e] <==
	2025/11/01 09:20:27 Starting overwatch
	2025/11/01 09:20:27 Using namespace: kubernetes-dashboard
	2025/11/01 09:20:27 Using in-cluster config to connect to apiserver
	2025/11/01 09:20:27 Using secret token for csrf signing
	2025/11/01 09:20:27 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/11/01 09:20:27 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/11/01 09:20:27 Successful initial request to the apiserver, version: v1.34.1
	2025/11/01 09:20:27 Generating JWE encryption key
	2025/11/01 09:20:27 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/11/01 09:20:27 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/11/01 09:20:28 Initializing JWE encryption key from synchronized object
	2025/11/01 09:20:28 Creating in-cluster Sidecar client
	2025/11/01 09:20:28 Serving insecurely on HTTP port: 9090
	2025/11/01 09:20:28 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/01 09:20:58 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [a32fea6f71cfef9621e6124cb4e8c00bce86e68063a63d6eead05eb0b30c3fd2] <==
	I1101 09:20:49.215710       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1101 09:20:49.227749       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1101 09:20:49.227809       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1101 09:20:49.230145       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:20:52.684968       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:20:56.946083       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:21:00.544622       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:21:03.598780       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:21:06.622045       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:21:06.627551       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1101 09:21:06.627756       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1101 09:21:06.628013       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-397460_05688139-5583-422a-81e1-05289b080d64!
	I1101 09:21:06.628358       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"4b6e7d43-9839-4337-9d32-0088bf11071a", APIVersion:"v1", ResourceVersion:"635", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-397460_05688139-5583-422a-81e1-05289b080d64 became leader
	W1101 09:21:06.631819       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:21:06.636180       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1101 09:21:06.729310       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-397460_05688139-5583-422a-81e1-05289b080d64!
	
	
	==> storage-provisioner [e68d2c72f2a280bb06520d0abda5c4e4af21e514107ea8a34112ce09ad3363e9] <==
	I1101 09:20:18.416106       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1101 09:20:48.418413       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-397460 -n no-preload-397460
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-397460 -n no-preload-397460: exit status 2 (420.000533ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context no-preload-397460 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/no-preload/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect no-preload-397460
helpers_test.go:243: (dbg) docker inspect no-preload-397460:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "dcacf8ef764db0b040f92377cfa21ce046a2a4c6f431a1c080345899f64f1a0a",
	        "Created": "2025-11-01T09:18:48.77329288Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 249262,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-01T09:20:08.166757565Z",
	            "FinishedAt": "2025-11-01T09:20:07.010214475Z"
	        },
	        "Image": "sha256:a1caeebaf98ed0136731e905a1e086f77985a42c2ebb5a7e0b3d0bd7fcbe10cc",
	        "ResolvConfPath": "/var/lib/docker/containers/dcacf8ef764db0b040f92377cfa21ce046a2a4c6f431a1c080345899f64f1a0a/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/dcacf8ef764db0b040f92377cfa21ce046a2a4c6f431a1c080345899f64f1a0a/hostname",
	        "HostsPath": "/var/lib/docker/containers/dcacf8ef764db0b040f92377cfa21ce046a2a4c6f431a1c080345899f64f1a0a/hosts",
	        "LogPath": "/var/lib/docker/containers/dcacf8ef764db0b040f92377cfa21ce046a2a4c6f431a1c080345899f64f1a0a/dcacf8ef764db0b040f92377cfa21ce046a2a4c6f431a1c080345899f64f1a0a-json.log",
	        "Name": "/no-preload-397460",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-397460:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "no-preload-397460",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "dcacf8ef764db0b040f92377cfa21ce046a2a4c6f431a1c080345899f64f1a0a",
	                "LowerDir": "/var/lib/docker/overlay2/0b34dab8141c8641f76f199b5dd54ea0b7163a5882ccc5e46e7cd5e259fdb760-init/diff:/var/lib/docker/overlay2/95b4043403bc6c2b0722fad0bb31817195e1e004282cc78b772e7c8c9f9def2d/diff",
	                "MergedDir": "/var/lib/docker/overlay2/0b34dab8141c8641f76f199b5dd54ea0b7163a5882ccc5e46e7cd5e259fdb760/merged",
	                "UpperDir": "/var/lib/docker/overlay2/0b34dab8141c8641f76f199b5dd54ea0b7163a5882ccc5e46e7cd5e259fdb760/diff",
	                "WorkDir": "/var/lib/docker/overlay2/0b34dab8141c8641f76f199b5dd54ea0b7163a5882ccc5e46e7cd5e259fdb760/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "no-preload-397460",
	                "Source": "/var/lib/docker/volumes/no-preload-397460/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-397460",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-397460",
	                "name.minikube.sigs.k8s.io": "no-preload-397460",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "6820c3172cb4737b686620bb279efbd06b0f6b816473a97ed9eae5815e4fc5bf",
	            "SandboxKey": "/var/run/docker/netns/6820c3172cb4",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33073"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33074"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33077"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33075"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33076"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-397460": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.94.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "22:ea:ca:6f:51:99",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "cc24cbf1ada0b118eca4d07595495e7c99849b988767800f68d20e97764309c9",
	                    "EndpointID": "cdf70b272ddb87e42cb3c427a1e3923f479097ad8a282bb886cd88f5c348ab81",
	                    "Gateway": "192.168.94.1",
	                    "IPAddress": "192.168.94.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-397460",
	                        "dcacf8ef764d"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-397460 -n no-preload-397460
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-397460 -n no-preload-397460: exit status 2 (426.355368ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/no-preload/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-397460 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p no-preload-397460 logs -n 25: (1.464550922s)
helpers_test.go:260: TestStartStop/group/no-preload/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬────────────────────
─┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼────────────────────
─┤
	│ ssh     │ -p cilium-204434 sudo crio config                                                                                                                                                                                                             │ cilium-204434          │ jenkins │ v1.37.0 │ 01 Nov 25 09:18 UTC │                     │
	│ delete  │ -p cilium-204434                                                                                                                                                                                                                              │ cilium-204434          │ jenkins │ v1.37.0 │ 01 Nov 25 09:18 UTC │ 01 Nov 25 09:18 UTC │
	│ start   │ -p old-k8s-version-152344 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-152344 │ jenkins │ v1.37.0 │ 01 Nov 25 09:18 UTC │ 01 Nov 25 09:19 UTC │
	│ delete  │ -p running-upgrade-274843                                                                                                                                                                                                                     │ running-upgrade-274843 │ jenkins │ v1.37.0 │ 01 Nov 25 09:18 UTC │ 01 Nov 25 09:18 UTC │
	│ start   │ -p no-preload-397460 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-397460      │ jenkins │ v1.37.0 │ 01 Nov 25 09:18 UTC │ 01 Nov 25 09:19 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-152344 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-152344 │ jenkins │ v1.37.0 │ 01 Nov 25 09:19 UTC │                     │
	│ stop    │ -p old-k8s-version-152344 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-152344 │ jenkins │ v1.37.0 │ 01 Nov 25 09:19 UTC │ 01 Nov 25 09:19 UTC │
	│ start   │ -p cert-expiration-303094 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-303094 │ jenkins │ v1.37.0 │ 01 Nov 25 09:19 UTC │ 01 Nov 25 09:19 UTC │
	│ addons  │ enable metrics-server -p no-preload-397460 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-397460      │ jenkins │ v1.37.0 │ 01 Nov 25 09:19 UTC │                     │
	│ delete  │ -p cert-expiration-303094                                                                                                                                                                                                                     │ cert-expiration-303094 │ jenkins │ v1.37.0 │ 01 Nov 25 09:19 UTC │ 01 Nov 25 09:19 UTC │
	│ stop    │ -p no-preload-397460 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-397460      │ jenkins │ v1.37.0 │ 01 Nov 25 09:19 UTC │ 01 Nov 25 09:20 UTC │
	│ start   │ -p embed-certs-236314 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-236314     │ jenkins │ v1.37.0 │ 01 Nov 25 09:19 UTC │ 01 Nov 25 09:20 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-152344 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-152344 │ jenkins │ v1.37.0 │ 01 Nov 25 09:19 UTC │ 01 Nov 25 09:19 UTC │
	│ start   │ -p old-k8s-version-152344 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-152344 │ jenkins │ v1.37.0 │ 01 Nov 25 09:19 UTC │ 01 Nov 25 09:20 UTC │
	│ addons  │ enable dashboard -p no-preload-397460 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-397460      │ jenkins │ v1.37.0 │ 01 Nov 25 09:20 UTC │ 01 Nov 25 09:20 UTC │
	│ start   │ -p no-preload-397460 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-397460      │ jenkins │ v1.37.0 │ 01 Nov 25 09:20 UTC │ 01 Nov 25 09:20 UTC │
	│ addons  │ enable metrics-server -p embed-certs-236314 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-236314     │ jenkins │ v1.37.0 │ 01 Nov 25 09:20 UTC │                     │
	│ stop    │ -p embed-certs-236314 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-236314     │ jenkins │ v1.37.0 │ 01 Nov 25 09:20 UTC │ 01 Nov 25 09:20 UTC │
	│ addons  │ enable dashboard -p embed-certs-236314 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-236314     │ jenkins │ v1.37.0 │ 01 Nov 25 09:20 UTC │ 01 Nov 25 09:20 UTC │
	│ start   │ -p embed-certs-236314 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-236314     │ jenkins │ v1.37.0 │ 01 Nov 25 09:20 UTC │                     │
	│ image   │ old-k8s-version-152344 image list --format=json                                                                                                                                                                                               │ old-k8s-version-152344 │ jenkins │ v1.37.0 │ 01 Nov 25 09:21 UTC │ 01 Nov 25 09:21 UTC │
	│ pause   │ -p old-k8s-version-152344 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-152344 │ jenkins │ v1.37.0 │ 01 Nov 25 09:21 UTC │                     │
	│ image   │ no-preload-397460 image list --format=json                                                                                                                                                                                                    │ no-preload-397460      │ jenkins │ v1.37.0 │ 01 Nov 25 09:21 UTC │ 01 Nov 25 09:21 UTC │
	│ pause   │ -p no-preload-397460 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-397460      │ jenkins │ v1.37.0 │ 01 Nov 25 09:21 UTC │                     │
	│ delete  │ -p old-k8s-version-152344                                                                                                                                                                                                                     │ old-k8s-version-152344 │ jenkins │ v1.37.0 │ 01 Nov 25 09:21 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴────────────────────
─┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/01 09:20:59
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1101 09:20:59.428556  256247 out.go:360] Setting OutFile to fd 1 ...
	I1101 09:20:59.428818  256247 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:20:59.428829  256247 out.go:374] Setting ErrFile to fd 2...
	I1101 09:20:59.428834  256247 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:20:59.429086  256247 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21835-5913/.minikube/bin
	I1101 09:20:59.429532  256247 out.go:368] Setting JSON to false
	I1101 09:20:59.430786  256247 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":3807,"bootTime":1761985052,"procs":302,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1101 09:20:59.430894  256247 start.go:143] virtualization: kvm guest
	I1101 09:20:59.433043  256247 out.go:179] * [embed-certs-236314] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1101 09:20:59.434385  256247 out.go:179]   - MINIKUBE_LOCATION=21835
	I1101 09:20:59.434401  256247 notify.go:221] Checking for updates...
	I1101 09:20:59.436934  256247 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1101 09:20:59.438200  256247 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21835-5913/kubeconfig
	I1101 09:20:59.439375  256247 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21835-5913/.minikube
	I1101 09:20:59.440741  256247 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1101 09:20:59.441964  256247 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1101 09:20:59.443571  256247 config.go:182] Loaded profile config "embed-certs-236314": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:20:59.444071  256247 driver.go:422] Setting default libvirt URI to qemu:///system
	I1101 09:20:59.469169  256247 docker.go:124] docker version: linux-28.5.1:Docker Engine - Community
	I1101 09:20:59.469258  256247 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 09:20:59.527156  256247 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:false NGoroutines:75 SystemTime:2025-11-01 09:20:59.517141551 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1101 09:20:59.527271  256247 docker.go:319] overlay module found
	I1101 09:20:59.529297  256247 out.go:179] * Using the docker driver based on existing profile
	I1101 09:20:59.530647  256247 start.go:309] selected driver: docker
	I1101 09:20:59.530667  256247 start.go:930] validating driver "docker" against &{Name:embed-certs-236314 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-236314 Namespace:default APIServerHAVIP: APIServerN
ame:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:
9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 09:20:59.530767  256247 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1101 09:20:59.531375  256247 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 09:20:59.591821  256247 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:false NGoroutines:75 SystemTime:2025-11-01 09:20:59.581513893 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1101 09:20:59.592141  256247 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1101 09:20:59.592171  256247 cni.go:84] Creating CNI manager for ""
	I1101 09:20:59.592221  256247 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1101 09:20:59.592272  256247 start.go:353] cluster config:
	{Name:embed-certs-236314 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-236314 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Contain
erRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false
DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 09:20:59.594010  256247 out.go:179] * Starting "embed-certs-236314" primary control-plane node in "embed-certs-236314" cluster
	I1101 09:20:59.595373  256247 cache.go:124] Beginning downloading kic base image for docker with crio
	I1101 09:20:59.596730  256247 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1101 09:20:59.598034  256247 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1101 09:20:59.598070  256247 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1101 09:20:59.598094  256247 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21835-5913/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1101 09:20:59.598109  256247 cache.go:59] Caching tarball of preloaded images
	I1101 09:20:59.598199  256247 preload.go:233] Found /home/jenkins/minikube-integration/21835-5913/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1101 09:20:59.598213  256247 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1101 09:20:59.598344  256247 profile.go:143] Saving config to /home/jenkins/minikube-integration/21835-5913/.minikube/profiles/embed-certs-236314/config.json ...
	I1101 09:20:59.620135  256247 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1101 09:20:59.620159  256247 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1101 09:20:59.620175  256247 cache.go:233] Successfully downloaded all kic artifacts
	I1101 09:20:59.620198  256247 start.go:360] acquireMachinesLock for embed-certs-236314: {Name:mk8eda201f80ebfb2f2bb01891a2b839f76263b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1101 09:20:59.620253  256247 start.go:364] duration metric: took 37.33µs to acquireMachinesLock for "embed-certs-236314"
	I1101 09:20:59.620271  256247 start.go:96] Skipping create...Using existing machine configuration
	I1101 09:20:59.620276  256247 fix.go:54] fixHost starting: 
	I1101 09:20:59.620535  256247 cli_runner.go:164] Run: docker container inspect embed-certs-236314 --format={{.State.Status}}
	I1101 09:20:59.639500  256247 fix.go:112] recreateIfNeeded on embed-certs-236314: state=Stopped err=<nil>
	W1101 09:20:59.639552  256247 fix.go:138] unexpected machine state, will restart: <nil>
	I1101 09:20:58.468250  216020 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": (10.063103618s)
	W1101 09:20:58.468297  216020 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	Unable to connect to the server: net/http: TLS handshake timeout
	 output: 
	** stderr ** 
	Unable to connect to the server: net/http: TLS handshake timeout
	
	** /stderr **
	I1101 09:20:58.468307  216020 logs.go:123] Gathering logs for kube-apiserver [b88c1a6c22eb7f0b872bf19c02addfdad95c9f578a8590d19c57a767cb19559f] ...
	I1101 09:20:58.468322  216020 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b88c1a6c22eb7f0b872bf19c02addfdad95c9f578a8590d19c57a767cb19559f"
	I1101 09:20:59.641465  256247 out.go:252] * Restarting existing docker container for "embed-certs-236314" ...
	I1101 09:20:59.641539  256247 cli_runner.go:164] Run: docker start embed-certs-236314
	I1101 09:20:59.902231  256247 cli_runner.go:164] Run: docker container inspect embed-certs-236314 --format={{.State.Status}}
	I1101 09:20:59.922418  256247 kic.go:430] container "embed-certs-236314" state is running.
	I1101 09:20:59.922774  256247 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-236314
	I1101 09:20:59.942287  256247 profile.go:143] Saving config to /home/jenkins/minikube-integration/21835-5913/.minikube/profiles/embed-certs-236314/config.json ...
	I1101 09:20:59.942510  256247 machine.go:94] provisionDockerMachine start ...
	I1101 09:20:59.942564  256247 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-236314
	I1101 09:20:59.962745  256247 main.go:143] libmachine: Using SSH client type: native
	I1101 09:20:59.963041  256247 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33078 <nil> <nil>}
	I1101 09:20:59.963056  256247 main.go:143] libmachine: About to run SSH command:
	hostname
	I1101 09:20:59.963759  256247 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:36770->127.0.0.1:33078: read: connection reset by peer
	I1101 09:21:03.117563  256247 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-236314
	
	I1101 09:21:03.117594  256247 ubuntu.go:182] provisioning hostname "embed-certs-236314"
	I1101 09:21:03.117656  256247 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-236314
	I1101 09:21:03.142432  256247 main.go:143] libmachine: Using SSH client type: native
	I1101 09:21:03.142735  256247 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33078 <nil> <nil>}
	I1101 09:21:03.142750  256247 main.go:143] libmachine: About to run SSH command:
	sudo hostname embed-certs-236314 && echo "embed-certs-236314" | sudo tee /etc/hostname
	I1101 09:21:03.305920  256247 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-236314
	
	I1101 09:21:03.306024  256247 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-236314
	I1101 09:21:03.331383  256247 main.go:143] libmachine: Using SSH client type: native
	I1101 09:21:03.331747  256247 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33078 <nil> <nil>}
	I1101 09:21:03.331779  256247 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-236314' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-236314/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-236314' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1101 09:21:03.490967  256247 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1101 09:21:03.491130  256247 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21835-5913/.minikube CaCertPath:/home/jenkins/minikube-integration/21835-5913/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21835-5913/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21835-5913/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21835-5913/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21835-5913/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21835-5913/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21835-5913/.minikube}
	I1101 09:21:03.491284  256247 ubuntu.go:190] setting up certificates
	I1101 09:21:03.491299  256247 provision.go:84] configureAuth start
	I1101 09:21:03.491599  256247 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-236314
	I1101 09:21:03.518311  256247 provision.go:143] copyHostCerts
	I1101 09:21:03.518365  256247 exec_runner.go:144] found /home/jenkins/minikube-integration/21835-5913/.minikube/ca.pem, removing ...
	I1101 09:21:03.518385  256247 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21835-5913/.minikube/ca.pem
	I1101 09:21:03.518461  256247 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21835-5913/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21835-5913/.minikube/ca.pem (1078 bytes)
	I1101 09:21:03.518591  256247 exec_runner.go:144] found /home/jenkins/minikube-integration/21835-5913/.minikube/cert.pem, removing ...
	I1101 09:21:03.518605  256247 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21835-5913/.minikube/cert.pem
	I1101 09:21:03.518646  256247 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21835-5913/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21835-5913/.minikube/cert.pem (1123 bytes)
	I1101 09:21:03.518803  256247 exec_runner.go:144] found /home/jenkins/minikube-integration/21835-5913/.minikube/key.pem, removing ...
	I1101 09:21:03.518819  256247 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21835-5913/.minikube/key.pem
	I1101 09:21:03.518904  256247 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21835-5913/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21835-5913/.minikube/key.pem (1675 bytes)
	I1101 09:21:03.519019  256247 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21835-5913/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21835-5913/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21835-5913/.minikube/certs/ca-key.pem org=jenkins.embed-certs-236314 san=[127.0.0.1 192.168.76.2 embed-certs-236314 localhost minikube]
	I1101 09:21:03.680093  256247 provision.go:177] copyRemoteCerts
	I1101 09:21:03.680157  256247 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1101 09:21:03.680200  256247 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-236314
	I1101 09:21:03.704893  256247 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33078 SSHKeyPath:/home/jenkins/minikube-integration/21835-5913/.minikube/machines/embed-certs-236314/id_rsa Username:docker}
	I1101 09:21:03.816511  256247 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-5913/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1101 09:21:03.837369  256247 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-5913/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1101 09:21:03.857322  256247 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-5913/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1101 09:21:03.878326  256247 provision.go:87] duration metric: took 387.013499ms to configureAuth
	I1101 09:21:03.878358  256247 ubuntu.go:206] setting minikube options for container-runtime
	I1101 09:21:03.878577  256247 config.go:182] Loaded profile config "embed-certs-236314": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:21:03.878713  256247 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-236314
	I1101 09:21:03.901022  256247 main.go:143] libmachine: Using SSH client type: native
	I1101 09:21:03.901326  256247 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33078 <nil> <nil>}
	I1101 09:21:03.901362  256247 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1101 09:21:04.244254  256247 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1101 09:21:04.244281  256247 machine.go:97] duration metric: took 4.30175559s to provisionDockerMachine
	I1101 09:21:04.244294  256247 start.go:293] postStartSetup for "embed-certs-236314" (driver="docker")
	I1101 09:21:04.244307  256247 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1101 09:21:04.244385  256247 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1101 09:21:04.244488  256247 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-236314
	I1101 09:21:04.266806  256247 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33078 SSHKeyPath:/home/jenkins/minikube-integration/21835-5913/.minikube/machines/embed-certs-236314/id_rsa Username:docker}
	I1101 09:21:04.373789  256247 ssh_runner.go:195] Run: cat /etc/os-release
	I1101 09:21:04.379229  256247 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1101 09:21:04.379281  256247 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1101 09:21:04.379293  256247 filesync.go:126] Scanning /home/jenkins/minikube-integration/21835-5913/.minikube/addons for local assets ...
	I1101 09:21:04.379364  256247 filesync.go:126] Scanning /home/jenkins/minikube-integration/21835-5913/.minikube/files for local assets ...
	I1101 09:21:04.379500  256247 filesync.go:149] local asset: /home/jenkins/minikube-integration/21835-5913/.minikube/files/etc/ssl/certs/94142.pem -> 94142.pem in /etc/ssl/certs
	I1101 09:21:04.379692  256247 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1101 09:21:04.389334  256247 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-5913/.minikube/files/etc/ssl/certs/94142.pem --> /etc/ssl/certs/94142.pem (1708 bytes)
	I1101 09:21:04.409193  256247 start.go:296] duration metric: took 164.885201ms for postStartSetup
	I1101 09:21:04.409277  256247 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1101 09:21:04.409326  256247 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-236314
	I1101 09:21:01.003189  216020 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1101 09:21:03.032694  216020 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": read tcp 192.168.85.1:46726->192.168.85.2:8443: read: connection reset by peer
	I1101 09:21:03.032778  216020 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1101 09:21:03.032833  216020 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1101 09:21:03.071972  216020 cri.go:89] found id: "b88c1a6c22eb7f0b872bf19c02addfdad95c9f578a8590d19c57a767cb19559f"
	I1101 09:21:03.072009  216020 cri.go:89] found id: "f1d1ad774988f50a8cc9b852d65649e1e6961c9ea7280d9c95161580ca21bce2"
	I1101 09:21:03.072015  216020 cri.go:89] found id: ""
	I1101 09:21:03.072023  216020 logs.go:282] 2 containers: [b88c1a6c22eb7f0b872bf19c02addfdad95c9f578a8590d19c57a767cb19559f f1d1ad774988f50a8cc9b852d65649e1e6961c9ea7280d9c95161580ca21bce2]
	I1101 09:21:03.072078  216020 ssh_runner.go:195] Run: which crictl
	I1101 09:21:03.077222  216020 ssh_runner.go:195] Run: which crictl
	I1101 09:21:03.081318  216020 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1101 09:21:03.081392  216020 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1101 09:21:03.113396  216020 cri.go:89] found id: ""
	I1101 09:21:03.113424  216020 logs.go:282] 0 containers: []
	W1101 09:21:03.113435  216020 logs.go:284] No container was found matching "etcd"
	I1101 09:21:03.113442  216020 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1101 09:21:03.113499  216020 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1101 09:21:03.149060  216020 cri.go:89] found id: ""
	I1101 09:21:03.149090  216020 logs.go:282] 0 containers: []
	W1101 09:21:03.149099  216020 logs.go:284] No container was found matching "coredns"
	I1101 09:21:03.149104  216020 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1101 09:21:03.149148  216020 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1101 09:21:03.183012  216020 cri.go:89] found id: "7bc51deb4b19b20f402b361941a1b67db11ece9919ebd34259e751ec1f07abb8"
	I1101 09:21:03.183038  216020 cri.go:89] found id: ""
	I1101 09:21:03.183048  216020 logs.go:282] 1 containers: [7bc51deb4b19b20f402b361941a1b67db11ece9919ebd34259e751ec1f07abb8]
	I1101 09:21:03.183109  216020 ssh_runner.go:195] Run: which crictl
	I1101 09:21:03.187539  216020 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1101 09:21:03.187609  216020 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1101 09:21:03.223255  216020 cri.go:89] found id: ""
	I1101 09:21:03.223287  216020 logs.go:282] 0 containers: []
	W1101 09:21:03.223298  216020 logs.go:284] No container was found matching "kube-proxy"
	I1101 09:21:03.223307  216020 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1101 09:21:03.223380  216020 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1101 09:21:03.263664  216020 cri.go:89] found id: "df274c74effda434b9fae196ccc17a5c3a3d2b7872fbe77a79ff7e34adc6b3cd"
	I1101 09:21:03.263684  216020 cri.go:89] found id: ""
	I1101 09:21:03.263691  216020 logs.go:282] 1 containers: [df274c74effda434b9fae196ccc17a5c3a3d2b7872fbe77a79ff7e34adc6b3cd]
	I1101 09:21:03.263745  216020 ssh_runner.go:195] Run: which crictl
	I1101 09:21:03.268387  216020 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1101 09:21:03.268458  216020 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1101 09:21:03.298659  216020 cri.go:89] found id: ""
	I1101 09:21:03.298685  216020 logs.go:282] 0 containers: []
	W1101 09:21:03.298697  216020 logs.go:284] No container was found matching "kindnet"
	I1101 09:21:03.298704  216020 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1101 09:21:03.298760  216020 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1101 09:21:03.331908  216020 cri.go:89] found id: ""
	I1101 09:21:03.331952  216020 logs.go:282] 0 containers: []
	W1101 09:21:03.331963  216020 logs.go:284] No container was found matching "storage-provisioner"
	I1101 09:21:03.331985  216020 logs.go:123] Gathering logs for container status ...
	I1101 09:21:03.332000  216020 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1101 09:21:03.371380  216020 logs.go:123] Gathering logs for kubelet ...
	I1101 09:21:03.371411  216020 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1101 09:21:03.492909  216020 logs.go:123] Gathering logs for kube-apiserver [b88c1a6c22eb7f0b872bf19c02addfdad95c9f578a8590d19c57a767cb19559f] ...
	I1101 09:21:03.492943  216020 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b88c1a6c22eb7f0b872bf19c02addfdad95c9f578a8590d19c57a767cb19559f"
	I1101 09:21:03.535514  216020 logs.go:123] Gathering logs for kube-controller-manager [df274c74effda434b9fae196ccc17a5c3a3d2b7872fbe77a79ff7e34adc6b3cd] ...
	I1101 09:21:03.535554  216020 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 df274c74effda434b9fae196ccc17a5c3a3d2b7872fbe77a79ff7e34adc6b3cd"
	I1101 09:21:03.567725  216020 logs.go:123] Gathering logs for CRI-O ...
	I1101 09:21:03.567751  216020 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1101 09:21:03.624454  216020 logs.go:123] Gathering logs for dmesg ...
	I1101 09:21:03.624492  216020 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1101 09:21:03.641942  216020 logs.go:123] Gathering logs for describe nodes ...
	I1101 09:21:03.641988  216020 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1101 09:21:03.724699  216020 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1101 09:21:03.724722  216020 logs.go:123] Gathering logs for kube-apiserver [f1d1ad774988f50a8cc9b852d65649e1e6961c9ea7280d9c95161580ca21bce2] ...
	I1101 09:21:03.724739  216020 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f1d1ad774988f50a8cc9b852d65649e1e6961c9ea7280d9c95161580ca21bce2"
	I1101 09:21:03.762460  216020 logs.go:123] Gathering logs for kube-scheduler [7bc51deb4b19b20f402b361941a1b67db11ece9919ebd34259e751ec1f07abb8] ...
	I1101 09:21:03.762492  216020 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7bc51deb4b19b20f402b361941a1b67db11ece9919ebd34259e751ec1f07abb8"
	I1101 09:21:04.430167  256247 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33078 SSHKeyPath:/home/jenkins/minikube-integration/21835-5913/.minikube/machines/embed-certs-236314/id_rsa Username:docker}
	I1101 09:21:04.534691  256247 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1101 09:21:04.540247  256247 fix.go:56] duration metric: took 4.919961684s for fixHost
	I1101 09:21:04.540279  256247 start.go:83] releasing machines lock for "embed-certs-236314", held for 4.920014606s
	I1101 09:21:04.540355  256247 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-236314
	I1101 09:21:04.562607  256247 ssh_runner.go:195] Run: cat /version.json
	I1101 09:21:04.562640  256247 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1101 09:21:04.562664  256247 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-236314
	I1101 09:21:04.562717  256247 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-236314
	I1101 09:21:04.586247  256247 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33078 SSHKeyPath:/home/jenkins/minikube-integration/21835-5913/.minikube/machines/embed-certs-236314/id_rsa Username:docker}
	I1101 09:21:04.586573  256247 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33078 SSHKeyPath:/home/jenkins/minikube-integration/21835-5913/.minikube/machines/embed-certs-236314/id_rsa Username:docker}
	I1101 09:21:04.756853  256247 ssh_runner.go:195] Run: systemctl --version
	I1101 09:21:04.764787  256247 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1101 09:21:04.802673  256247 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1101 09:21:04.807927  256247 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1101 09:21:04.808018  256247 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1101 09:21:04.816976  256247 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1101 09:21:04.817002  256247 start.go:496] detecting cgroup driver to use...
	I1101 09:21:04.817034  256247 detect.go:190] detected "systemd" cgroup driver on host os
	I1101 09:21:04.817072  256247 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1101 09:21:04.832478  256247 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1101 09:21:04.845735  256247 docker.go:218] disabling cri-docker service (if available) ...
	I1101 09:21:04.845796  256247 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1101 09:21:04.863128  256247 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1101 09:21:04.880355  256247 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1101 09:21:04.984886  256247 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1101 09:21:05.081764  256247 docker.go:234] disabling docker service ...
	I1101 09:21:05.081839  256247 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1101 09:21:05.098206  256247 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1101 09:21:05.114058  256247 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1101 09:21:05.223021  256247 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1101 09:21:05.317262  256247 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1101 09:21:05.333800  256247 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1101 09:21:05.353462  256247 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1101 09:21:05.353523  256247 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:21:05.366081  256247 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1101 09:21:05.366163  256247 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:21:05.377341  256247 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:21:05.389161  256247 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:21:05.399923  256247 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1101 09:21:05.411569  256247 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:21:05.422482  256247 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:21:05.433571  256247 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:21:05.444037  256247 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1101 09:21:05.453109  256247 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1101 09:21:05.462316  256247 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 09:21:05.552996  256247 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1101 09:21:05.672436  256247 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1101 09:21:05.672507  256247 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1101 09:21:05.677194  256247 start.go:564] Will wait 60s for crictl version
	I1101 09:21:05.677262  256247 ssh_runner.go:195] Run: which crictl
	I1101 09:21:05.681691  256247 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1101 09:21:05.712910  256247 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1101 09:21:05.713006  256247 ssh_runner.go:195] Run: crio --version
	I1101 09:21:05.749945  256247 ssh_runner.go:195] Run: crio --version
	I1101 09:21:05.785002  256247 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1101 09:21:05.786452  256247 cli_runner.go:164] Run: docker network inspect embed-certs-236314 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1101 09:21:05.806545  256247 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1101 09:21:05.811035  256247 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1101 09:21:05.823070  256247 kubeadm.go:884] updating cluster {Name:embed-certs-236314 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-236314 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1101 09:21:05.823222  256247 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1101 09:21:05.823290  256247 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 09:21:05.860442  256247 crio.go:514] all images are preloaded for cri-o runtime.
	I1101 09:21:05.860469  256247 crio.go:433] Images already preloaded, skipping extraction
	I1101 09:21:05.860523  256247 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 09:21:05.894418  256247 crio.go:514] all images are preloaded for cri-o runtime.
	I1101 09:21:05.894442  256247 cache_images.go:86] Images are preloaded, skipping loading
	I1101 09:21:05.894452  256247 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1101 09:21:05.894585  256247 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=embed-certs-236314 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:embed-certs-236314 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1101 09:21:05.894682  256247 ssh_runner.go:195] Run: crio config
	I1101 09:21:05.958193  256247 cni.go:84] Creating CNI manager for ""
	I1101 09:21:05.958220  256247 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1101 09:21:05.958241  256247 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1101 09:21:05.958272  256247 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-236314 NodeName:embed-certs-236314 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1101 09:21:05.958417  256247 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-236314"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1101 09:21:05.958480  256247 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1101 09:21:05.968246  256247 binaries.go:44] Found k8s binaries, skipping transfer
	I1101 09:21:05.968333  256247 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1101 09:21:05.979913  256247 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I1101 09:21:05.995392  256247 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1101 09:21:06.010488  256247 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2214 bytes)
	I1101 09:21:06.026746  256247 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1101 09:21:06.031280  256247 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1101 09:21:06.042676  256247 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 09:21:06.143161  256247 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1101 09:21:06.169080  256247 certs.go:69] Setting up /home/jenkins/minikube-integration/21835-5913/.minikube/profiles/embed-certs-236314 for IP: 192.168.76.2
	I1101 09:21:06.169098  256247 certs.go:195] generating shared ca certs ...
	I1101 09:21:06.169113  256247 certs.go:227] acquiring lock for ca certs: {Name:mkfdee6a84670347521013ebeef165551380cb9c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:21:06.169257  256247 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21835-5913/.minikube/ca.key
	I1101 09:21:06.169294  256247 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21835-5913/.minikube/proxy-client-ca.key
	I1101 09:21:06.169304  256247 certs.go:257] generating profile certs ...
	I1101 09:21:06.169377  256247 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21835-5913/.minikube/profiles/embed-certs-236314/client.key
	I1101 09:21:06.169429  256247 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21835-5913/.minikube/profiles/embed-certs-236314/apiserver.key.a5b65e95
	I1101 09:21:06.169461  256247 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21835-5913/.minikube/profiles/embed-certs-236314/proxy-client.key
	I1101 09:21:06.169583  256247 certs.go:484] found cert: /home/jenkins/minikube-integration/21835-5913/.minikube/certs/9414.pem (1338 bytes)
	W1101 09:21:06.169614  256247 certs.go:480] ignoring /home/jenkins/minikube-integration/21835-5913/.minikube/certs/9414_empty.pem, impossibly tiny 0 bytes
	I1101 09:21:06.169623  256247 certs.go:484] found cert: /home/jenkins/minikube-integration/21835-5913/.minikube/certs/ca-key.pem (1675 bytes)
	I1101 09:21:06.169646  256247 certs.go:484] found cert: /home/jenkins/minikube-integration/21835-5913/.minikube/certs/ca.pem (1078 bytes)
	I1101 09:21:06.169668  256247 certs.go:484] found cert: /home/jenkins/minikube-integration/21835-5913/.minikube/certs/cert.pem (1123 bytes)
	I1101 09:21:06.169690  256247 certs.go:484] found cert: /home/jenkins/minikube-integration/21835-5913/.minikube/certs/key.pem (1675 bytes)
	I1101 09:21:06.169724  256247 certs.go:484] found cert: /home/jenkins/minikube-integration/21835-5913/.minikube/files/etc/ssl/certs/94142.pem (1708 bytes)
	I1101 09:21:06.170400  256247 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-5913/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1101 09:21:06.190826  256247 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-5913/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1101 09:21:06.212357  256247 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-5913/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1101 09:21:06.235094  256247 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-5913/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1101 09:21:06.261595  256247 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-5913/.minikube/profiles/embed-certs-236314/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1101 09:21:06.291497  256247 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-5913/.minikube/profiles/embed-certs-236314/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1101 09:21:06.313723  256247 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-5913/.minikube/profiles/embed-certs-236314/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1101 09:21:06.335708  256247 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-5913/.minikube/profiles/embed-certs-236314/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1101 09:21:06.359169  256247 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-5913/.minikube/certs/9414.pem --> /usr/share/ca-certificates/9414.pem (1338 bytes)
	I1101 09:21:06.381284  256247 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-5913/.minikube/files/etc/ssl/certs/94142.pem --> /usr/share/ca-certificates/94142.pem (1708 bytes)
	I1101 09:21:06.406335  256247 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-5913/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1101 09:21:06.432761  256247 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1101 09:21:06.449574  256247 ssh_runner.go:195] Run: openssl version
	I1101 09:21:06.457196  256247 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9414.pem && ln -fs /usr/share/ca-certificates/9414.pem /etc/ssl/certs/9414.pem"
	I1101 09:21:06.469491  256247 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9414.pem
	I1101 09:21:06.474373  256247 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  1 08:35 /usr/share/ca-certificates/9414.pem
	I1101 09:21:06.474440  256247 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9414.pem
	I1101 09:21:06.524681  256247 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/9414.pem /etc/ssl/certs/51391683.0"
	I1101 09:21:06.536206  256247 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/94142.pem && ln -fs /usr/share/ca-certificates/94142.pem /etc/ssl/certs/94142.pem"
	I1101 09:21:06.549451  256247 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/94142.pem
	I1101 09:21:06.554798  256247 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  1 08:35 /usr/share/ca-certificates/94142.pem
	I1101 09:21:06.554858  256247 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/94142.pem
	I1101 09:21:06.605485  256247 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/94142.pem /etc/ssl/certs/3ec20f2e.0"
	I1101 09:21:06.614903  256247 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1101 09:21:06.627025  256247 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1101 09:21:06.632680  256247 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  1 08:29 /usr/share/ca-certificates/minikubeCA.pem
	I1101 09:21:06.632811  256247 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1101 09:21:06.678623  256247 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1101 09:21:06.689397  256247 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1101 09:21:06.694203  256247 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1101 09:21:06.751379  256247 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1101 09:21:06.811187  256247 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1101 09:21:06.885004  256247 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1101 09:21:06.952131  256247 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1101 09:21:07.016714  256247 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1101 09:21:07.073129  256247 kubeadm.go:401] StartCluster: {Name:embed-certs-236314 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-236314 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 09:21:07.073339  256247 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1101 09:21:07.073411  256247 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1101 09:21:07.120675  256247 cri.go:89] found id: "bca3056e4356124989f2b2cba8377cf3f660970574583fcca877cb776005e6ca"
	I1101 09:21:07.120700  256247 cri.go:89] found id: "cdf866b372073a7755ed447cdf8634d89a5c22e16db02cc9cfe7c76643d51a6c"
	I1101 09:21:07.120705  256247 cri.go:89] found id: "c53066ca825ef150c1b3480d4c681c275883620b56bfc97b3e50480bdd6dc761"
	I1101 09:21:07.120710  256247 cri.go:89] found id: "63c22508cf7059b3b3f3d3dca5c0c8bae9ba37801ed8914d301b3b69f0fc7f4d"
	I1101 09:21:07.120714  256247 cri.go:89] found id: ""
	I1101 09:21:07.120763  256247 ssh_runner.go:195] Run: sudo runc list -f json
	W1101 09:21:07.143127  256247 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T09:21:07Z" level=error msg="open /run/runc: no such file or directory"
	I1101 09:21:07.143198  256247 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1101 09:21:07.154796  256247 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1101 09:21:07.154817  256247 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1101 09:21:07.154928  256247 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1101 09:21:07.168805  256247 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1101 09:21:07.169762  256247 kubeconfig.go:47] verify endpoint returned: get endpoint: "embed-certs-236314" does not appear in /home/jenkins/minikube-integration/21835-5913/kubeconfig
	I1101 09:21:07.170370  256247 kubeconfig.go:62] /home/jenkins/minikube-integration/21835-5913/kubeconfig needs updating (will repair): [kubeconfig missing "embed-certs-236314" cluster setting kubeconfig missing "embed-certs-236314" context setting]
	I1101 09:21:07.171354  256247 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21835-5913/kubeconfig: {Name:mk9d3795c1875d6d9ba81c81d9d8436a6b7942d2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:21:07.177098  256247 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1101 09:21:07.188909  256247 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.76.2
	I1101 09:21:07.189031  256247 kubeadm.go:602] duration metric: took 34.205793ms to restartPrimaryControlPlane
	I1101 09:21:07.189671  256247 kubeadm.go:403] duration metric: took 116.610912ms to StartCluster
	I1101 09:21:07.189716  256247 settings.go:142] acquiring lock: {Name:mkb1ba7d0d4bb15f3f0746ce486d72703f901580 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:21:07.189824  256247 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21835-5913/kubeconfig
	I1101 09:21:07.192220  256247 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21835-5913/kubeconfig: {Name:mk9d3795c1875d6d9ba81c81d9d8436a6b7942d2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:21:07.192824  256247 config.go:182] Loaded profile config "embed-certs-236314": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:21:07.192893  256247 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1101 09:21:07.192978  256247 addons.go:70] Setting storage-provisioner=true in profile "embed-certs-236314"
	I1101 09:21:07.192996  256247 addons.go:239] Setting addon storage-provisioner=true in "embed-certs-236314"
	W1101 09:21:07.193004  256247 addons.go:248] addon storage-provisioner should already be in state true
	I1101 09:21:07.193025  256247 host.go:66] Checking if "embed-certs-236314" exists ...
	I1101 09:21:07.193246  256247 addons.go:70] Setting dashboard=true in profile "embed-certs-236314"
	I1101 09:21:07.193297  256247 addons.go:239] Setting addon dashboard=true in "embed-certs-236314"
	I1101 09:21:07.193306  256247 addons.go:70] Setting default-storageclass=true in profile "embed-certs-236314"
	I1101 09:21:07.193371  256247 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-236314"
	I1101 09:21:07.193502  256247 cli_runner.go:164] Run: docker container inspect embed-certs-236314 --format={{.State.Status}}
	I1101 09:21:07.193369  256247 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	W1101 09:21:07.193314  256247 addons.go:248] addon dashboard should already be in state true
	I1101 09:21:07.194152  256247 host.go:66] Checking if "embed-certs-236314" exists ...
	I1101 09:21:07.194686  256247 cli_runner.go:164] Run: docker container inspect embed-certs-236314 --format={{.State.Status}}
	I1101 09:21:07.194893  256247 cli_runner.go:164] Run: docker container inspect embed-certs-236314 --format={{.State.Status}}
	I1101 09:21:07.196606  256247 out.go:179] * Verifying Kubernetes components...
	I1101 09:21:07.197961  256247 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 09:21:07.224574  256247 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1101 09:21:07.225148  256247 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1101 09:21:07.226052  256247 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1101 09:21:07.226107  256247 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1101 09:21:07.226176  256247 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-236314
	I1101 09:21:07.227830  256247 addons.go:239] Setting addon default-storageclass=true in "embed-certs-236314"
	W1101 09:21:07.227853  256247 addons.go:248] addon default-storageclass should already be in state true
	I1101 09:21:07.227901  256247 host.go:66] Checking if "embed-certs-236314" exists ...
	I1101 09:21:07.228117  256247 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	
	
	==> CRI-O <==
	Nov 01 09:20:28 no-preload-397460 crio[560]: time="2025-11-01T09:20:28.867021137Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 01 09:20:28 no-preload-397460 crio[560]: time="2025-11-01T09:20:28.870753974Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 01 09:20:28 no-preload-397460 crio[560]: time="2025-11-01T09:20:28.870786004Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 01 09:20:39 no-preload-397460 crio[560]: time="2025-11-01T09:20:39.032956939Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=d22c7ff2-4e04-4250-a489-54a647296eda name=/runtime.v1.ImageService/ImageStatus
	Nov 01 09:20:39 no-preload-397460 crio[560]: time="2025-11-01T09:20:39.035884802Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=80e8d0f5-9f94-4c85-9ad0-e866d6a087ee name=/runtime.v1.ImageService/ImageStatus
	Nov 01 09:20:39 no-preload-397460 crio[560]: time="2025-11-01T09:20:39.039177164Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-mxwvk/dashboard-metrics-scraper" id=91030c83-554b-4a8c-a9d1-d447ae9a9b59 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 09:20:39 no-preload-397460 crio[560]: time="2025-11-01T09:20:39.039317502Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 09:20:39 no-preload-397460 crio[560]: time="2025-11-01T09:20:39.046277936Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 09:20:39 no-preload-397460 crio[560]: time="2025-11-01T09:20:39.046716124Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 09:20:39 no-preload-397460 crio[560]: time="2025-11-01T09:20:39.075060785Z" level=info msg="Created container 6dc0e61dffecc42ec75132038f2cd325ebcc36b066eb79f9d644c0a144d6656c: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-mxwvk/dashboard-metrics-scraper" id=91030c83-554b-4a8c-a9d1-d447ae9a9b59 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 09:20:39 no-preload-397460 crio[560]: time="2025-11-01T09:20:39.075716902Z" level=info msg="Starting container: 6dc0e61dffecc42ec75132038f2cd325ebcc36b066eb79f9d644c0a144d6656c" id=bfc3a31b-0583-49f5-b70b-68d2b292af6c name=/runtime.v1.RuntimeService/StartContainer
	Nov 01 09:20:39 no-preload-397460 crio[560]: time="2025-11-01T09:20:39.077594718Z" level=info msg="Started container" PID=1745 containerID=6dc0e61dffecc42ec75132038f2cd325ebcc36b066eb79f9d644c0a144d6656c description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-mxwvk/dashboard-metrics-scraper id=bfc3a31b-0583-49f5-b70b-68d2b292af6c name=/runtime.v1.RuntimeService/StartContainer sandboxID=0fffdfd7a1db85635071e32b10e8e0adcd42227943175d281114edde3f437b74
	Nov 01 09:20:39 no-preload-397460 crio[560]: time="2025-11-01T09:20:39.133190526Z" level=info msg="Removing container: d2f3bc7f04da9b00ddee1f4f9529af8fb1606fb48b94f1c2e998026782488924" id=33aad358-f361-40ed-984d-78f16aee70e7 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 01 09:20:39 no-preload-397460 crio[560]: time="2025-11-01T09:20:39.142992183Z" level=info msg="Removed container d2f3bc7f04da9b00ddee1f4f9529af8fb1606fb48b94f1c2e998026782488924: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-mxwvk/dashboard-metrics-scraper" id=33aad358-f361-40ed-984d-78f16aee70e7 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 01 09:20:49 no-preload-397460 crio[560]: time="2025-11-01T09:20:49.160754599Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=945614cb-d248-4411-9665-6a034929fa52 name=/runtime.v1.ImageService/ImageStatus
	Nov 01 09:20:49 no-preload-397460 crio[560]: time="2025-11-01T09:20:49.161765852Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=6bc6bd02-8499-45cc-962e-66e2a624e536 name=/runtime.v1.ImageService/ImageStatus
	Nov 01 09:20:49 no-preload-397460 crio[560]: time="2025-11-01T09:20:49.163071914Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=9fc7cf02-dbfe-46ce-bb91-d5e5e475e96c name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 09:20:49 no-preload-397460 crio[560]: time="2025-11-01T09:20:49.163224959Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 09:20:49 no-preload-397460 crio[560]: time="2025-11-01T09:20:49.167829683Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 09:20:49 no-preload-397460 crio[560]: time="2025-11-01T09:20:49.168028388Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/9cc9774ec6cfe4c2139ee2bc1e61266e53450df6187adfe307c3fef6052b6722/merged/etc/passwd: no such file or directory"
	Nov 01 09:20:49 no-preload-397460 crio[560]: time="2025-11-01T09:20:49.168054784Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/9cc9774ec6cfe4c2139ee2bc1e61266e53450df6187adfe307c3fef6052b6722/merged/etc/group: no such file or directory"
	Nov 01 09:20:49 no-preload-397460 crio[560]: time="2025-11-01T09:20:49.168327099Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 09:20:49 no-preload-397460 crio[560]: time="2025-11-01T09:20:49.198229242Z" level=info msg="Created container a32fea6f71cfef9621e6124cb4e8c00bce86e68063a63d6eead05eb0b30c3fd2: kube-system/storage-provisioner/storage-provisioner" id=9fc7cf02-dbfe-46ce-bb91-d5e5e475e96c name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 09:20:49 no-preload-397460 crio[560]: time="2025-11-01T09:20:49.198890909Z" level=info msg="Starting container: a32fea6f71cfef9621e6124cb4e8c00bce86e68063a63d6eead05eb0b30c3fd2" id=edcaba22-7191-4881-9053-e53519a68d8e name=/runtime.v1.RuntimeService/StartContainer
	Nov 01 09:20:49 no-preload-397460 crio[560]: time="2025-11-01T09:20:49.200662128Z" level=info msg="Started container" PID=1759 containerID=a32fea6f71cfef9621e6124cb4e8c00bce86e68063a63d6eead05eb0b30c3fd2 description=kube-system/storage-provisioner/storage-provisioner id=edcaba22-7191-4881-9053-e53519a68d8e name=/runtime.v1.RuntimeService/StartContainer sandboxID=ef7af1c6a35d25eb2133ac2f491e4a3ab62a55ded9568dfe79df80175e1e9df1
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	a32fea6f71cfe       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           20 seconds ago      Running             storage-provisioner         1                   ef7af1c6a35d2       storage-provisioner                          kube-system
	6dc0e61dffecc       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           30 seconds ago      Exited              dashboard-metrics-scraper   2                   0fffdfd7a1db8       dashboard-metrics-scraper-6ffb444bf9-mxwvk   kubernetes-dashboard
	6fcf14948dd8f       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   41 seconds ago      Running             kubernetes-dashboard        0                   63e6751da3226       kubernetes-dashboard-855c9754f9-89s5g        kubernetes-dashboard
	e4ac2478e800b       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           51 seconds ago      Running             busybox                     1                   9c0d28f1b654b       busybox                                      default
	7f8c011365813       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                           51 seconds ago      Running             coredns                     0                   41d16bfc949c3       coredns-66bc5c9577-z5578                     kube-system
	e68d2c72f2a28       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           51 seconds ago      Exited              storage-provisioner         0                   ef7af1c6a35d2       storage-provisioner                          kube-system
	4e0ef8237ebe0       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           51 seconds ago      Running             kindnet-cni                 0                   ed241a8f8412d       kindnet-lddf5                                kube-system
	60a457721a2e9       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                           51 seconds ago      Running             kube-proxy                  0                   68578b483307b       kube-proxy-5kpft                             kube-system
	fd6c2e5673978       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                           54 seconds ago      Running             etcd                        0                   10169d1e422b2       etcd-no-preload-397460                       kube-system
	1e65eafe05118       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                           54 seconds ago      Running             kube-controller-manager     0                   82a2de5dc7246       kube-controller-manager-no-preload-397460    kube-system
	1b44261dff64d       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                           54 seconds ago      Running             kube-apiserver              0                   02178c5e6fb24       kube-apiserver-no-preload-397460             kube-system
	a154077d09e97       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                           54 seconds ago      Running             kube-scheduler              0                   1751a62d0fb12       kube-scheduler-no-preload-397460             kube-system
	
	
	==> coredns [7f8c011365813d9b9a3fc4de470f15a11f84b8fc7448785cac2980d200ab6328] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = c7556d8fdf49c5e32a9077be8cfb9fc6947bb07e663a10d55b192eb63ad1f2bd9793e8e5f5a36fc9abb1957831eec5c997fd9821790e3990ae9531bf41ecea37
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:44025 - 14277 "HINFO IN 5301831790333311994.5848890886534015879. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.524150284s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               no-preload-397460
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-397460
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=21e20c7776311c6e29254646bf2620ea610dd192
	                    minikube.k8s.io/name=no-preload-397460
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_01T09_19_17_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 01 Nov 2025 09:19:14 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-397460
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 01 Nov 2025 09:20:57 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 01 Nov 2025 09:20:47 +0000   Sat, 01 Nov 2025 09:19:12 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 01 Nov 2025 09:20:47 +0000   Sat, 01 Nov 2025 09:19:12 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 01 Nov 2025 09:20:47 +0000   Sat, 01 Nov 2025 09:19:12 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 01 Nov 2025 09:20:47 +0000   Sat, 01 Nov 2025 09:19:35 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.94.2
	  Hostname:    no-preload-397460
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863348Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863348Ki
	  pods:               110
	System Info:
	  Machine ID:                 98aac72b9abe9f06f1b9b38568f5cc96
	  System UUID:                782711df-25d2-4083-899f-9ab94eb16882
	  Boot ID:                    87f40279-f2b1-40f3-beea-00aea5942dd1
	  Kernel Version:             6.8.0-1043-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         92s
	  kube-system                 coredns-66bc5c9577-z5578                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     107s
	  kube-system                 etcd-no-preload-397460                        100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         112s
	  kube-system                 kindnet-lddf5                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      107s
	  kube-system                 kube-apiserver-no-preload-397460              250m (3%)     0 (0%)      0 (0%)           0 (0%)         112s
	  kube-system                 kube-controller-manager-no-preload-397460     200m (2%)     0 (0%)      0 (0%)           0 (0%)         112s
	  kube-system                 kube-proxy-5kpft                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         107s
	  kube-system                 kube-scheduler-no-preload-397460              100m (1%)     0 (0%)      0 (0%)           0 (0%)         112s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         106s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-mxwvk    0 (0%)        0 (0%)      0 (0%)           0 (0%)         48s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-89s5g         0 (0%)        0 (0%)      0 (0%)           0 (0%)         48s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 106s               kube-proxy       
	  Normal  Starting                 51s                kube-proxy       
	  Normal  Starting                 113s               kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  112s               kubelet          Node no-preload-397460 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    112s               kubelet          Node no-preload-397460 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     112s               kubelet          Node no-preload-397460 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           108s               node-controller  Node no-preload-397460 event: Registered Node no-preload-397460 in Controller
	  Normal  NodeReady                94s                kubelet          Node no-preload-397460 status is now: NodeReady
	  Normal  Starting                 54s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  54s (x8 over 54s)  kubelet          Node no-preload-397460 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    54s (x8 over 54s)  kubelet          Node no-preload-397460 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     54s (x8 over 54s)  kubelet          Node no-preload-397460 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           49s                node-controller  Node no-preload-397460 event: Registered Node no-preload-397460 in Controller
	
	
	==> dmesg <==
	[  +0.000025] ll header: 00000000: 0a 94 8e 4a 1a 1e 72 e7 eb 50 9b c7 08 00
	[  +2.047726] IPv4: martian source 10.105.176.244 from 192.168.49.2, on dev eth0
	[  +0.000008] ll header: 00000000: 0a 94 8e 4a 1a 1e 72 e7 eb 50 9b c7 08 00
	[Nov 1 08:38] IPv4: martian source 10.105.176.244 from 192.168.49.2, on dev eth0
	[  +0.000025] ll header: 00000000: 0a 94 8e 4a 1a 1e 72 e7 eb 50 9b c7 08 00
	[  +2.215674] IPv4: martian source 10.105.176.244 from 192.168.49.2, on dev eth0
	[  +0.000007] ll header: 00000000: 0a 94 8e 4a 1a 1e 72 e7 eb 50 9b c7 08 00
	[  +1.047939] IPv4: martian source 10.244.0.3 from 192.168.49.2, on dev eth0
	[  +0.000023] ll header: 00000000: 0a 94 8e 4a 1a 1e 72 e7 eb 50 9b c7 08 00
	[  +1.023915] IPv4: martian source 10.244.0.3 from 192.168.49.2, on dev eth0
	[  +0.000007] ll header: 00000000: 0a 94 8e 4a 1a 1e 72 e7 eb 50 9b c7 08 00
	[  +1.023851] IPv4: martian source 10.244.0.3 from 192.168.49.2, on dev eth0
	[  +0.000021] ll header: 00000000: 0a 94 8e 4a 1a 1e 72 e7 eb 50 9b c7 08 00
	[  +1.023867] IPv4: martian source 10.244.0.3 from 192.168.49.2, on dev eth0
	[  +0.000026] ll header: 00000000: 0a 94 8e 4a 1a 1e 72 e7 eb 50 9b c7 08 00
	[  +1.023880] IPv4: martian source 10.244.0.3 from 192.168.49.2, on dev eth0
	[  +0.000033] ll header: 00000000: 0a 94 8e 4a 1a 1e 72 e7 eb 50 9b c7 08 00
	[  +1.023884] IPv4: martian source 10.244.0.3 from 192.168.49.2, on dev eth0
	[  +0.000029] ll header: 00000000: 0a 94 8e 4a 1a 1e 72 e7 eb 50 9b c7 08 00
	[  +1.023851] IPv4: martian source 10.244.0.3 from 192.168.49.2, on dev eth0
	[  +0.000028] ll header: 00000000: 0a 94 8e 4a 1a 1e 72 e7 eb 50 9b c7 08 00
	[  +4.031524] IPv4: martian source 10.244.0.3 from 192.168.49.2, on dev eth0
	[  +0.000026] ll header: 00000000: 0a 94 8e 4a 1a 1e 72 e7 eb 50 9b c7 08 00
	[  +8.255123] IPv4: martian source 10.244.0.3 from 192.168.49.2, on dev eth0
	[  +0.000022] ll header: 00000000: 0a 94 8e 4a 1a 1e 72 e7 eb 50 9b c7 08 00
	
	
	==> etcd [fd6c2e567397890a7f512b409824d618cc40968b819d316cdae2a59eaaeee805] <==
	{"level":"warn","ts":"2025-11-01T09:20:16.643322Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47212","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:20:16.650024Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47226","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:20:16.659542Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47238","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:20:16.666374Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47252","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:20:16.673706Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47268","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:20:16.682857Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47286","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:20:16.689516Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47310","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:20:16.696047Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47328","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:20:16.703397Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47356","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:20:16.710728Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47372","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:20:16.718312Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47392","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:20:16.727141Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47414","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:20:16.736653Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47442","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:20:16.750705Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47458","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:20:16.758989Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47476","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:20:16.765783Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47494","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:20:16.772695Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47510","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:20:16.780911Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47520","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:20:16.788611Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47552","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:20:16.796916Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47560","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:20:16.803928Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47588","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:20:16.823325Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47612","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:20:16.830434Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47624","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:20:16.838541Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47644","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:20:16.892149Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47654","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 09:21:09 up  1:03,  0 user,  load average: 2.99, 2.53, 1.55
	Linux no-preload-397460 6.8.0-1043-gcp #46~22.04.1-Ubuntu SMP Wed Oct 22 19:00:03 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [4e0ef8237ebe095c9933e907c2062eaeb0996060e1bec0bf09d7e957557ac6f6] <==
	I1101 09:20:18.555147       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1101 09:20:18.555455       1 main.go:139] hostIP = 192.168.94.2
	podIP = 192.168.94.2
	I1101 09:20:18.555620       1 main.go:148] setting mtu 1500 for CNI 
	I1101 09:20:18.555642       1 main.go:178] kindnetd IP family: "ipv4"
	I1101 09:20:18.555667       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-01T09:20:18Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1101 09:20:18.845543       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1101 09:20:18.845590       1 controller.go:381] "Waiting for informer caches to sync"
	I1101 09:20:18.845604       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1101 09:20:18.845731       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1101 09:20:19.346847       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1101 09:20:19.346924       1 metrics.go:72] Registering metrics
	I1101 09:20:19.347014       1 controller.go:711] "Syncing nftables rules"
	I1101 09:20:28.845700       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1101 09:20:28.845776       1 main.go:301] handling current node
	I1101 09:20:38.846445       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1101 09:20:38.846473       1 main.go:301] handling current node
	I1101 09:20:48.845950       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1101 09:20:48.846003       1 main.go:301] handling current node
	I1101 09:20:58.852971       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1101 09:20:58.853010       1 main.go:301] handling current node
	I1101 09:21:08.854962       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1101 09:21:08.855010       1 main.go:301] handling current node
	
	
	==> kube-apiserver [1b44261dff64d3c4c47d37a621f72a585d7a917cf393d156278c5cbcf49d2100] <==
	I1101 09:20:17.483023       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1101 09:20:17.483138       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1101 09:20:17.483274       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1101 09:20:17.483330       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1101 09:20:17.483553       1 aggregator.go:171] initial CRD sync complete...
	I1101 09:20:17.483573       1 autoregister_controller.go:144] Starting autoregister controller
	I1101 09:20:17.483580       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1101 09:20:17.483586       1 cache.go:39] Caches are synced for autoregister controller
	I1101 09:20:17.483810       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1101 09:20:17.483827       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1101 09:20:17.485798       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1101 09:20:17.490505       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	E1101 09:20:17.490724       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1101 09:20:17.504242       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1101 09:20:17.818462       1 controller.go:667] quota admission added evaluator for: namespaces
	I1101 09:20:17.852135       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1101 09:20:17.875851       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1101 09:20:17.885386       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1101 09:20:17.897101       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1101 09:20:17.940507       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.107.166.253"}
	I1101 09:20:17.952569       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.111.121.40"}
	I1101 09:20:18.388643       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1101 09:20:21.258023       1 controller.go:667] quota admission added evaluator for: endpoints
	I1101 09:20:21.307923       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1101 09:20:21.360399       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [1e65eafe05118922eba4075b65156c842b7bb2e5dc4b74d48586e74e8830e4ad] <==
	I1101 09:20:20.834459       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1101 09:20:20.838740       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1101 09:20:20.838895       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1101 09:20:20.840221       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1101 09:20:20.840322       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1101 09:20:20.844615       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1101 09:20:20.854119       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1101 09:20:20.854217       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1101 09:20:20.854390       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1101 09:20:20.854410       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1101 09:20:20.854418       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1101 09:20:20.854546       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1101 09:20:20.855390       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1101 09:20:20.864144       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1101 09:20:20.864192       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1101 09:20:20.864234       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1101 09:20:20.864241       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1101 09:20:20.864248       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1101 09:20:20.868102       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1101 09:20:20.868235       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1101 09:20:20.868427       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="no-preload-397460"
	I1101 09:20:20.868495       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1101 09:20:20.871742       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1101 09:20:20.873915       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1101 09:20:20.881256       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	
	
	==> kube-proxy [60a457721a2e92a962fa3f6f4b3be7038d081f4c316d20147a3d72dd30c7da7e] <==
	I1101 09:20:18.439846       1 server_linux.go:53] "Using iptables proxy"
	I1101 09:20:18.523223       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1101 09:20:18.624167       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1101 09:20:18.624219       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.94.2"]
	E1101 09:20:18.624319       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1101 09:20:18.644248       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1101 09:20:18.644304       1 server_linux.go:132] "Using iptables Proxier"
	I1101 09:20:18.649696       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1101 09:20:18.650068       1 server.go:527] "Version info" version="v1.34.1"
	I1101 09:20:18.650105       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1101 09:20:18.652073       1 config.go:309] "Starting node config controller"
	I1101 09:20:18.652228       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1101 09:20:18.652249       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1101 09:20:18.652308       1 config.go:106] "Starting endpoint slice config controller"
	I1101 09:20:18.652322       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1101 09:20:18.652349       1 config.go:200] "Starting service config controller"
	I1101 09:20:18.652362       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1101 09:20:18.652380       1 config.go:403] "Starting serviceCIDR config controller"
	I1101 09:20:18.652395       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1101 09:20:18.752902       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1101 09:20:18.752927       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1101 09:20:18.752926       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [a154077d09e972273696e9d1d20b891c240a792171f425c23b57e8599069bf1b] <==
	I1101 09:20:17.341218       1 serving.go:386] Generated self-signed cert in-memory
	I1101 09:20:18.383798       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1101 09:20:18.383851       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1101 09:20:18.390564       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1101 09:20:18.390581       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1101 09:20:18.390609       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1101 09:20:18.390610       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1101 09:20:18.390653       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1101 09:20:18.390676       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1101 09:20:18.391024       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1101 09:20:18.391431       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1101 09:20:18.490803       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1101 09:20:18.490803       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1101 09:20:18.491263       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 01 09:20:21 no-preload-397460 kubelet[705]: I1101 09:20:21.572848     705 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cb5v5\" (UniqueName: \"kubernetes.io/projected/0656e39c-eaf6-4c88-9863-16f00e262508-kube-api-access-cb5v5\") pod \"kubernetes-dashboard-855c9754f9-89s5g\" (UID: \"0656e39c-eaf6-4c88-9863-16f00e262508\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-89s5g"
	Nov 01 09:20:21 no-preload-397460 kubelet[705]: I1101 09:20:21.572900     705 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ll9d5\" (UniqueName: \"kubernetes.io/projected/be52c87c-207d-4398-a867-fb7f8c1859ab-kube-api-access-ll9d5\") pod \"dashboard-metrics-scraper-6ffb444bf9-mxwvk\" (UID: \"be52c87c-207d-4398-a867-fb7f8c1859ab\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-mxwvk"
	Nov 01 09:20:21 no-preload-397460 kubelet[705]: I1101 09:20:21.572965     705 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/0656e39c-eaf6-4c88-9863-16f00e262508-tmp-volume\") pod \"kubernetes-dashboard-855c9754f9-89s5g\" (UID: \"0656e39c-eaf6-4c88-9863-16f00e262508\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-89s5g"
	Nov 01 09:20:25 no-preload-397460 kubelet[705]: I1101 09:20:25.087609     705 scope.go:117] "RemoveContainer" containerID="c05140c46ef4ef03bd67c307dc6d7294ebf6efa84832e5a599fe29a9d08311fc"
	Nov 01 09:20:26 no-preload-397460 kubelet[705]: I1101 09:20:26.093078     705 scope.go:117] "RemoveContainer" containerID="c05140c46ef4ef03bd67c307dc6d7294ebf6efa84832e5a599fe29a9d08311fc"
	Nov 01 09:20:26 no-preload-397460 kubelet[705]: I1101 09:20:26.093243     705 scope.go:117] "RemoveContainer" containerID="d2f3bc7f04da9b00ddee1f4f9529af8fb1606fb48b94f1c2e998026782488924"
	Nov 01 09:20:26 no-preload-397460 kubelet[705]: E1101 09:20:26.093517     705 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-mxwvk_kubernetes-dashboard(be52c87c-207d-4398-a867-fb7f8c1859ab)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-mxwvk" podUID="be52c87c-207d-4398-a867-fb7f8c1859ab"
	Nov 01 09:20:27 no-preload-397460 kubelet[705]: I1101 09:20:27.099267     705 scope.go:117] "RemoveContainer" containerID="d2f3bc7f04da9b00ddee1f4f9529af8fb1606fb48b94f1c2e998026782488924"
	Nov 01 09:20:27 no-preload-397460 kubelet[705]: E1101 09:20:27.099518     705 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-mxwvk_kubernetes-dashboard(be52c87c-207d-4398-a867-fb7f8c1859ab)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-mxwvk" podUID="be52c87c-207d-4398-a867-fb7f8c1859ab"
	Nov 01 09:20:28 no-preload-397460 kubelet[705]: I1101 09:20:28.103618     705 scope.go:117] "RemoveContainer" containerID="d2f3bc7f04da9b00ddee1f4f9529af8fb1606fb48b94f1c2e998026782488924"
	Nov 01 09:20:28 no-preload-397460 kubelet[705]: E1101 09:20:28.103796     705 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-mxwvk_kubernetes-dashboard(be52c87c-207d-4398-a867-fb7f8c1859ab)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-mxwvk" podUID="be52c87c-207d-4398-a867-fb7f8c1859ab"
	Nov 01 09:20:30 no-preload-397460 kubelet[705]: I1101 09:20:30.964239     705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-89s5g" podStartSLOduration=3.850255756 podStartE2EDuration="9.96421845s" podCreationTimestamp="2025-11-01 09:20:21 +0000 UTC" firstStartedPulling="2025-11-01 09:20:21.778371168 +0000 UTC m=+6.861819799" lastFinishedPulling="2025-11-01 09:20:27.892333863 +0000 UTC m=+12.975782493" observedRunningTime="2025-11-01 09:20:28.114332674 +0000 UTC m=+13.197781309" watchObservedRunningTime="2025-11-01 09:20:30.96421845 +0000 UTC m=+16.047667084"
	Nov 01 09:20:39 no-preload-397460 kubelet[705]: I1101 09:20:39.032419     705 scope.go:117] "RemoveContainer" containerID="d2f3bc7f04da9b00ddee1f4f9529af8fb1606fb48b94f1c2e998026782488924"
	Nov 01 09:20:39 no-preload-397460 kubelet[705]: I1101 09:20:39.131839     705 scope.go:117] "RemoveContainer" containerID="d2f3bc7f04da9b00ddee1f4f9529af8fb1606fb48b94f1c2e998026782488924"
	Nov 01 09:20:39 no-preload-397460 kubelet[705]: I1101 09:20:39.132078     705 scope.go:117] "RemoveContainer" containerID="6dc0e61dffecc42ec75132038f2cd325ebcc36b066eb79f9d644c0a144d6656c"
	Nov 01 09:20:39 no-preload-397460 kubelet[705]: E1101 09:20:39.132271     705 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-mxwvk_kubernetes-dashboard(be52c87c-207d-4398-a867-fb7f8c1859ab)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-mxwvk" podUID="be52c87c-207d-4398-a867-fb7f8c1859ab"
	Nov 01 09:20:46 no-preload-397460 kubelet[705]: I1101 09:20:46.899602     705 scope.go:117] "RemoveContainer" containerID="6dc0e61dffecc42ec75132038f2cd325ebcc36b066eb79f9d644c0a144d6656c"
	Nov 01 09:20:46 no-preload-397460 kubelet[705]: E1101 09:20:46.899895     705 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-mxwvk_kubernetes-dashboard(be52c87c-207d-4398-a867-fb7f8c1859ab)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-mxwvk" podUID="be52c87c-207d-4398-a867-fb7f8c1859ab"
	Nov 01 09:20:49 no-preload-397460 kubelet[705]: I1101 09:20:49.160405     705 scope.go:117] "RemoveContainer" containerID="e68d2c72f2a280bb06520d0abda5c4e4af21e514107ea8a34112ce09ad3363e9"
	Nov 01 09:20:58 no-preload-397460 kubelet[705]: I1101 09:20:58.032299     705 scope.go:117] "RemoveContainer" containerID="6dc0e61dffecc42ec75132038f2cd325ebcc36b066eb79f9d644c0a144d6656c"
	Nov 01 09:20:58 no-preload-397460 kubelet[705]: E1101 09:20:58.032498     705 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-mxwvk_kubernetes-dashboard(be52c87c-207d-4398-a867-fb7f8c1859ab)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-mxwvk" podUID="be52c87c-207d-4398-a867-fb7f8c1859ab"
	Nov 01 09:21:03 no-preload-397460 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 01 09:21:03 no-preload-397460 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 01 09:21:03 no-preload-397460 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Nov 01 09:21:03 no-preload-397460 systemd[1]: kubelet.service: Consumed 1.678s CPU time.
	
	
	==> kubernetes-dashboard [6fcf14948dd8fe2f7f1fbb5fe1c692b7d291220499560edc384c983cbef16d2e] <==
	2025/11/01 09:20:27 Starting overwatch
	2025/11/01 09:20:27 Using namespace: kubernetes-dashboard
	2025/11/01 09:20:27 Using in-cluster config to connect to apiserver
	2025/11/01 09:20:27 Using secret token for csrf signing
	2025/11/01 09:20:27 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/11/01 09:20:27 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/11/01 09:20:27 Successful initial request to the apiserver, version: v1.34.1
	2025/11/01 09:20:27 Generating JWE encryption key
	2025/11/01 09:20:27 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/11/01 09:20:27 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/11/01 09:20:28 Initializing JWE encryption key from synchronized object
	2025/11/01 09:20:28 Creating in-cluster Sidecar client
	2025/11/01 09:20:28 Serving insecurely on HTTP port: 9090
	2025/11/01 09:20:28 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/01 09:20:58 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [a32fea6f71cfef9621e6124cb4e8c00bce86e68063a63d6eead05eb0b30c3fd2] <==
	I1101 09:20:49.215710       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1101 09:20:49.227749       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1101 09:20:49.227809       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1101 09:20:49.230145       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:20:52.684968       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:20:56.946083       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:21:00.544622       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:21:03.598780       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:21:06.622045       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:21:06.627551       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1101 09:21:06.627756       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1101 09:21:06.628013       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-397460_05688139-5583-422a-81e1-05289b080d64!
	I1101 09:21:06.628358       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"4b6e7d43-9839-4337-9d32-0088bf11071a", APIVersion:"v1", ResourceVersion:"635", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-397460_05688139-5583-422a-81e1-05289b080d64 became leader
	W1101 09:21:06.631819       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:21:06.636180       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1101 09:21:06.729310       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-397460_05688139-5583-422a-81e1-05289b080d64!
	W1101 09:21:08.643164       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:21:08.649595       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [e68d2c72f2a280bb06520d0abda5c4e4af21e514107ea8a34112ce09ad3363e9] <==
	I1101 09:20:18.416106       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1101 09:20:48.418413       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-397460 -n no-preload-397460
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-397460 -n no-preload-397460: exit status 2 (368.039803ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context no-preload-397460 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/no-preload/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/no-preload/serial/Pause (7.48s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (2.28s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-340756 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-340756 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (263.674411ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T09:21:44Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-340756 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect newest-cni-340756
helpers_test.go:243: (dbg) docker inspect newest-cni-340756:

-- stdout --
	[
	    {
	        "Id": "9977e921720f0f30775d6c9164d67ea2db04ff6c50519da751c92db9fc5fc85b",
	        "Created": "2025-11-01T09:21:23.482376732Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 266022,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-01T09:21:23.526295899Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:a1caeebaf98ed0136731e905a1e086f77985a42c2ebb5a7e0b3d0bd7fcbe10cc",
	        "ResolvConfPath": "/var/lib/docker/containers/9977e921720f0f30775d6c9164d67ea2db04ff6c50519da751c92db9fc5fc85b/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/9977e921720f0f30775d6c9164d67ea2db04ff6c50519da751c92db9fc5fc85b/hostname",
	        "HostsPath": "/var/lib/docker/containers/9977e921720f0f30775d6c9164d67ea2db04ff6c50519da751c92db9fc5fc85b/hosts",
	        "LogPath": "/var/lib/docker/containers/9977e921720f0f30775d6c9164d67ea2db04ff6c50519da751c92db9fc5fc85b/9977e921720f0f30775d6c9164d67ea2db04ff6c50519da751c92db9fc5fc85b-json.log",
	        "Name": "/newest-cni-340756",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-340756:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "newest-cni-340756",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "9977e921720f0f30775d6c9164d67ea2db04ff6c50519da751c92db9fc5fc85b",
	                "LowerDir": "/var/lib/docker/overlay2/79a5b3fa0361a2a9c5d3edbeca3366aecf897b34708fba6c670fef7311204878-init/diff:/var/lib/docker/overlay2/95b4043403bc6c2b0722fad0bb31817195e1e004282cc78b772e7c8c9f9def2d/diff",
	                "MergedDir": "/var/lib/docker/overlay2/79a5b3fa0361a2a9c5d3edbeca3366aecf897b34708fba6c670fef7311204878/merged",
	                "UpperDir": "/var/lib/docker/overlay2/79a5b3fa0361a2a9c5d3edbeca3366aecf897b34708fba6c670fef7311204878/diff",
	                "WorkDir": "/var/lib/docker/overlay2/79a5b3fa0361a2a9c5d3edbeca3366aecf897b34708fba6c670fef7311204878/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-340756",
	                "Source": "/var/lib/docker/volumes/newest-cni-340756/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-340756",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-340756",
	                "name.minikube.sigs.k8s.io": "newest-cni-340756",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "a104c53d675f760f5de5fce88affbe16350064b3c9152b86cf48b74e1aec48d3",
	            "SandboxKey": "/var/run/docker/netns/a104c53d675f",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33088"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33089"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33092"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33090"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33091"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-340756": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.94.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "52:98:d2:b1:54:69",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "6d98c8d1b523eaf92b0807c4ccbd2e833f29938a64f5a83fb094948eae42b694",
	                    "EndpointID": "c98c8f7feec12279e86da58dab37c4c1174d6e45231ed20786597655c8c5c210",
	                    "Gateway": "192.168.94.1",
	                    "IPAddress": "192.168.94.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-340756",
	                        "9977e921720f"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
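For a manual post-mortem, the mapped host ports recorded in the inspect output above can be read back with the same Go template that minikube's cli_runner uses later in this log (illustrative only; it assumes the newest-cni-340756 container still exists on the host):

	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' newest-cni-340756

Given the Ports block above, this should print 33088, the host port forwarded to the node's SSH port; substituting "8443/tcp" would give the forwarded API server port (33091).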
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-340756 -n newest-cni-340756
helpers_test.go:252: <<< TestStartStop/group/newest-cni/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-340756 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p newest-cni-340756 logs -n 25: (1.053071668s)
helpers_test.go:260: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ start   │ -p cert-expiration-303094 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-303094       │ jenkins │ v1.37.0 │ 01 Nov 25 09:19 UTC │ 01 Nov 25 09:19 UTC │
	│ addons  │ enable metrics-server -p no-preload-397460 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-397460            │ jenkins │ v1.37.0 │ 01 Nov 25 09:19 UTC │                     │
	│ delete  │ -p cert-expiration-303094                                                                                                                                                                                                                     │ cert-expiration-303094       │ jenkins │ v1.37.0 │ 01 Nov 25 09:19 UTC │ 01 Nov 25 09:19 UTC │
	│ stop    │ -p no-preload-397460 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-397460            │ jenkins │ v1.37.0 │ 01 Nov 25 09:19 UTC │ 01 Nov 25 09:20 UTC │
	│ start   │ -p embed-certs-236314 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-236314           │ jenkins │ v1.37.0 │ 01 Nov 25 09:19 UTC │ 01 Nov 25 09:20 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-152344 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-152344       │ jenkins │ v1.37.0 │ 01 Nov 25 09:19 UTC │ 01 Nov 25 09:19 UTC │
	│ start   │ -p old-k8s-version-152344 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-152344       │ jenkins │ v1.37.0 │ 01 Nov 25 09:19 UTC │ 01 Nov 25 09:20 UTC │
	│ addons  │ enable dashboard -p no-preload-397460 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-397460            │ jenkins │ v1.37.0 │ 01 Nov 25 09:20 UTC │ 01 Nov 25 09:20 UTC │
	│ start   │ -p no-preload-397460 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-397460            │ jenkins │ v1.37.0 │ 01 Nov 25 09:20 UTC │ 01 Nov 25 09:20 UTC │
	│ addons  │ enable metrics-server -p embed-certs-236314 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-236314           │ jenkins │ v1.37.0 │ 01 Nov 25 09:20 UTC │                     │
	│ stop    │ -p embed-certs-236314 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-236314           │ jenkins │ v1.37.0 │ 01 Nov 25 09:20 UTC │ 01 Nov 25 09:20 UTC │
	│ addons  │ enable dashboard -p embed-certs-236314 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-236314           │ jenkins │ v1.37.0 │ 01 Nov 25 09:20 UTC │ 01 Nov 25 09:20 UTC │
	│ start   │ -p embed-certs-236314 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-236314           │ jenkins │ v1.37.0 │ 01 Nov 25 09:20 UTC │ 01 Nov 25 09:21 UTC │
	│ image   │ old-k8s-version-152344 image list --format=json                                                                                                                                                                                               │ old-k8s-version-152344       │ jenkins │ v1.37.0 │ 01 Nov 25 09:21 UTC │ 01 Nov 25 09:21 UTC │
	│ pause   │ -p old-k8s-version-152344 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-152344       │ jenkins │ v1.37.0 │ 01 Nov 25 09:21 UTC │                     │
	│ image   │ no-preload-397460 image list --format=json                                                                                                                                                                                                    │ no-preload-397460            │ jenkins │ v1.37.0 │ 01 Nov 25 09:21 UTC │ 01 Nov 25 09:21 UTC │
	│ pause   │ -p no-preload-397460 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-397460            │ jenkins │ v1.37.0 │ 01 Nov 25 09:21 UTC │                     │
	│ delete  │ -p old-k8s-version-152344                                                                                                                                                                                                                     │ old-k8s-version-152344       │ jenkins │ v1.37.0 │ 01 Nov 25 09:21 UTC │ 01 Nov 25 09:21 UTC │
	│ delete  │ -p old-k8s-version-152344                                                                                                                                                                                                                     │ old-k8s-version-152344       │ jenkins │ v1.37.0 │ 01 Nov 25 09:21 UTC │ 01 Nov 25 09:21 UTC │
	│ delete  │ -p no-preload-397460                                                                                                                                                                                                                          │ no-preload-397460            │ jenkins │ v1.37.0 │ 01 Nov 25 09:21 UTC │ 01 Nov 25 09:21 UTC │
	│ delete  │ -p disable-driver-mounts-366530                                                                                                                                                                                                               │ disable-driver-mounts-366530 │ jenkins │ v1.37.0 │ 01 Nov 25 09:21 UTC │ 01 Nov 25 09:21 UTC │
	│ start   │ -p default-k8s-diff-port-648641 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-648641 │ jenkins │ v1.37.0 │ 01 Nov 25 09:21 UTC │                     │
	│ delete  │ -p no-preload-397460                                                                                                                                                                                                                          │ no-preload-397460            │ jenkins │ v1.37.0 │ 01 Nov 25 09:21 UTC │ 01 Nov 25 09:21 UTC │
	│ start   │ -p newest-cni-340756 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-340756            │ jenkins │ v1.37.0 │ 01 Nov 25 09:21 UTC │ 01 Nov 25 09:21 UTC │
	│ addons  │ enable metrics-server -p newest-cni-340756 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-340756            │ jenkins │ v1.37.0 │ 01 Nov 25 09:21 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/01 09:21:16
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1101 09:21:16.432075  263568 out.go:360] Setting OutFile to fd 1 ...
	I1101 09:21:16.432846  263568 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:21:16.432891  263568 out.go:374] Setting ErrFile to fd 2...
	I1101 09:21:16.432898  263568 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:21:16.433460  263568 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21835-5913/.minikube/bin
	I1101 09:21:16.434584  263568 out.go:368] Setting JSON to false
	I1101 09:21:16.436204  263568 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":3824,"bootTime":1761985052,"procs":303,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1101 09:21:16.436384  263568 start.go:143] virtualization: kvm guest
	I1101 09:21:16.463856  263568 out.go:179] * [newest-cni-340756] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1101 09:21:16.469043  263568 out.go:179]   - MINIKUBE_LOCATION=21835
	I1101 09:21:16.469051  263568 notify.go:221] Checking for updates...
	I1101 09:21:16.473202  263568 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1101 09:21:16.475197  263568 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21835-5913/kubeconfig
	I1101 09:21:16.477469  263568 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21835-5913/.minikube
	I1101 09:21:16.479064  263568 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1101 09:21:16.482076  263568 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1101 09:21:16.485231  263568 config.go:182] Loaded profile config "default-k8s-diff-port-648641": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:21:16.485374  263568 config.go:182] Loaded profile config "embed-certs-236314": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:21:16.485491  263568 config.go:182] Loaded profile config "kubernetes-upgrade-846924": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:21:16.485633  263568 driver.go:422] Setting default libvirt URI to qemu:///system
	I1101 09:21:16.524156  263568 docker.go:124] docker version: linux-28.5.1:Docker Engine - Community
	I1101 09:21:16.524472  263568 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 09:21:16.630320  263568 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:53 OomKillDisable:false NGoroutines:67 SystemTime:2025-11-01 09:21:16.616732342 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1101 09:21:16.630455  263568 docker.go:319] overlay module found
	I1101 09:21:16.632143  263568 out.go:179] * Using the docker driver based on user configuration
	I1101 09:21:16.634719  263568 start.go:309] selected driver: docker
	I1101 09:21:16.634742  263568 start.go:930] validating driver "docker" against <nil>
	I1101 09:21:16.634759  263568 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1101 09:21:16.636387  263568 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 09:21:16.721124  263568 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:54 OomKillDisable:false NGoroutines:69 SystemTime:2025-11-01 09:21:16.708216677 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1101 09:21:16.721343  263568 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	W1101 09:21:16.721379  263568 out.go:285] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I1101 09:21:16.721687  263568 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1101 09:21:16.753372  263568 out.go:179] * Using Docker driver with root privileges
	I1101 09:21:16.775951  263568 cni.go:84] Creating CNI manager for ""
	I1101 09:21:16.776069  263568 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1101 09:21:16.776087  263568 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1101 09:21:16.776251  263568 start.go:353] cluster config:
	{Name:newest-cni-340756 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-340756 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnet
ClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 09:21:16.798022  263568 out.go:179] * Starting "newest-cni-340756" primary control-plane node in "newest-cni-340756" cluster
	I1101 09:21:16.804595  263568 cache.go:124] Beginning downloading kic base image for docker with crio
	I1101 09:21:16.807047  263568 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1101 09:21:16.808657  263568 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1101 09:21:16.808716  263568 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21835-5913/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1101 09:21:16.808751  263568 cache.go:59] Caching tarball of preloaded images
	I1101 09:21:16.808757  263568 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1101 09:21:16.808899  263568 preload.go:233] Found /home/jenkins/minikube-integration/21835-5913/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1101 09:21:16.808918  263568 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1101 09:21:16.809051  263568 profile.go:143] Saving config to /home/jenkins/minikube-integration/21835-5913/.minikube/profiles/newest-cni-340756/config.json ...
	I1101 09:21:16.809077  263568 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21835-5913/.minikube/profiles/newest-cni-340756/config.json: {Name:mk5907ec4f9df3976ba620184d9e796a35524126 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:21:16.838145  263568 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1101 09:21:16.838175  263568 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1101 09:21:16.838197  263568 cache.go:233] Successfully downloaded all kic artifacts
	I1101 09:21:16.838225  263568 start.go:360] acquireMachinesLock for newest-cni-340756: {Name:mk88172481da3b8a8d740f548867bdcc84a2d863 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1101 09:21:16.838330  263568 start.go:364] duration metric: took 81.283µs to acquireMachinesLock for "newest-cni-340756"
	I1101 09:21:16.838359  263568 start.go:93] Provisioning new machine with config: &{Name:newest-cni-340756 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-340756 Namespace:default APIServerHAVIP: APIServer
Name:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:fals
e DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1101 09:21:16.838440  263568 start.go:125] createHost starting for "" (driver="docker")
	W1101 09:21:14.570607  256247 pod_ready.go:104] pod "coredns-66bc5c9577-wwvth" is not "Ready", error: <nil>
	W1101 09:21:16.573319  256247 pod_ready.go:104] pod "coredns-66bc5c9577-wwvth" is not "Ready", error: <nil>
	W1101 09:21:19.069347  256247 pod_ready.go:104] pod "coredns-66bc5c9577-wwvth" is not "Ready", error: <nil>
	I1101 09:21:16.189640  216020 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1101 09:21:16.190173  216020 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1101 09:21:16.190234  216020 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1101 09:21:16.190294  216020 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1101 09:21:16.226313  216020 cri.go:89] found id: "b88c1a6c22eb7f0b872bf19c02addfdad95c9f578a8590d19c57a767cb19559f"
	I1101 09:21:16.226342  216020 cri.go:89] found id: ""
	I1101 09:21:16.226352  216020 logs.go:282] 1 containers: [b88c1a6c22eb7f0b872bf19c02addfdad95c9f578a8590d19c57a767cb19559f]
	I1101 09:21:16.226408  216020 ssh_runner.go:195] Run: which crictl
	I1101 09:21:16.231537  216020 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1101 09:21:16.231607  216020 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1101 09:21:16.266019  216020 cri.go:89] found id: ""
	I1101 09:21:16.266047  216020 logs.go:282] 0 containers: []
	W1101 09:21:16.266058  216020 logs.go:284] No container was found matching "etcd"
	I1101 09:21:16.266066  216020 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1101 09:21:16.266127  216020 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1101 09:21:16.302134  216020 cri.go:89] found id: ""
	I1101 09:21:16.302164  216020 logs.go:282] 0 containers: []
	W1101 09:21:16.302176  216020 logs.go:284] No container was found matching "coredns"
	I1101 09:21:16.302183  216020 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1101 09:21:16.302241  216020 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1101 09:21:16.332061  216020 cri.go:89] found id: "7bc51deb4b19b20f402b361941a1b67db11ece9919ebd34259e751ec1f07abb8"
	I1101 09:21:16.332085  216020 cri.go:89] found id: ""
	I1101 09:21:16.332095  216020 logs.go:282] 1 containers: [7bc51deb4b19b20f402b361941a1b67db11ece9919ebd34259e751ec1f07abb8]
	I1101 09:21:16.332150  216020 ssh_runner.go:195] Run: which crictl
	I1101 09:21:16.337277  216020 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1101 09:21:16.337342  216020 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1101 09:21:16.376616  216020 cri.go:89] found id: ""
	I1101 09:21:16.376645  216020 logs.go:282] 0 containers: []
	W1101 09:21:16.376656  216020 logs.go:284] No container was found matching "kube-proxy"
	I1101 09:21:16.376664  216020 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1101 09:21:16.376726  216020 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1101 09:21:16.413540  216020 cri.go:89] found id: "df274c74effda434b9fae196ccc17a5c3a3d2b7872fbe77a79ff7e34adc6b3cd"
	I1101 09:21:16.413565  216020 cri.go:89] found id: ""
	I1101 09:21:16.413575  216020 logs.go:282] 1 containers: [df274c74effda434b9fae196ccc17a5c3a3d2b7872fbe77a79ff7e34adc6b3cd]
	I1101 09:21:16.413631  216020 ssh_runner.go:195] Run: which crictl
	I1101 09:21:16.418883  216020 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1101 09:21:16.418954  216020 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1101 09:21:16.454886  216020 cri.go:89] found id: ""
	I1101 09:21:16.454916  216020 logs.go:282] 0 containers: []
	W1101 09:21:16.454932  216020 logs.go:284] No container was found matching "kindnet"
	I1101 09:21:16.454940  216020 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1101 09:21:16.455000  216020 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1101 09:21:16.502922  216020 cri.go:89] found id: ""
	I1101 09:21:16.502946  216020 logs.go:282] 0 containers: []
	W1101 09:21:16.502966  216020 logs.go:284] No container was found matching "storage-provisioner"
	I1101 09:21:16.502979  216020 logs.go:123] Gathering logs for kubelet ...
	I1101 09:21:16.502993  216020 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1101 09:21:16.676611  216020 logs.go:123] Gathering logs for dmesg ...
	I1101 09:21:16.676660  216020 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1101 09:21:16.701528  216020 logs.go:123] Gathering logs for describe nodes ...
	I1101 09:21:16.701584  216020 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1101 09:21:16.772358  216020 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1101 09:21:16.772385  216020 logs.go:123] Gathering logs for kube-apiserver [b88c1a6c22eb7f0b872bf19c02addfdad95c9f578a8590d19c57a767cb19559f] ...
	I1101 09:21:16.772403  216020 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b88c1a6c22eb7f0b872bf19c02addfdad95c9f578a8590d19c57a767cb19559f"
	I1101 09:21:16.813480  216020 logs.go:123] Gathering logs for kube-scheduler [7bc51deb4b19b20f402b361941a1b67db11ece9919ebd34259e751ec1f07abb8] ...
	I1101 09:21:16.813522  216020 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7bc51deb4b19b20f402b361941a1b67db11ece9919ebd34259e751ec1f07abb8"
	I1101 09:21:16.895592  216020 logs.go:123] Gathering logs for kube-controller-manager [df274c74effda434b9fae196ccc17a5c3a3d2b7872fbe77a79ff7e34adc6b3cd] ...
	I1101 09:21:16.895634  216020 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 df274c74effda434b9fae196ccc17a5c3a3d2b7872fbe77a79ff7e34adc6b3cd"
	I1101 09:21:16.931448  216020 logs.go:123] Gathering logs for CRI-O ...
	I1101 09:21:16.931486  216020 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1101 09:21:17.011546  216020 logs.go:123] Gathering logs for container status ...
	I1101 09:21:17.011592  216020 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1101 09:21:19.550953  216020 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1101 09:21:19.551460  216020 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1101 09:21:19.551521  216020 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1101 09:21:19.551586  216020 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1101 09:21:19.591105  216020 cri.go:89] found id: "b88c1a6c22eb7f0b872bf19c02addfdad95c9f578a8590d19c57a767cb19559f"
	I1101 09:21:19.591131  216020 cri.go:89] found id: ""
	I1101 09:21:19.591141  216020 logs.go:282] 1 containers: [b88c1a6c22eb7f0b872bf19c02addfdad95c9f578a8590d19c57a767cb19559f]
	I1101 09:21:19.591200  216020 ssh_runner.go:195] Run: which crictl
	I1101 09:21:19.597977  216020 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1101 09:21:19.598052  216020 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1101 09:21:19.634859  216020 cri.go:89] found id: ""
	I1101 09:21:19.634907  216020 logs.go:282] 0 containers: []
	W1101 09:21:19.634918  216020 logs.go:284] No container was found matching "etcd"
	I1101 09:21:19.634926  216020 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1101 09:21:19.634991  216020 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1101 09:21:19.674355  216020 cri.go:89] found id: ""
	I1101 09:21:19.674384  216020 logs.go:282] 0 containers: []
	W1101 09:21:19.674396  216020 logs.go:284] No container was found matching "coredns"
	I1101 09:21:19.674404  216020 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1101 09:21:19.674462  216020 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1101 09:21:19.712970  216020 cri.go:89] found id: "7bc51deb4b19b20f402b361941a1b67db11ece9919ebd34259e751ec1f07abb8"
	I1101 09:21:19.712994  216020 cri.go:89] found id: ""
	I1101 09:21:19.713004  216020 logs.go:282] 1 containers: [7bc51deb4b19b20f402b361941a1b67db11ece9919ebd34259e751ec1f07abb8]
	I1101 09:21:19.713069  216020 ssh_runner.go:195] Run: which crictl
	I1101 09:21:19.718651  216020 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1101 09:21:19.718745  216020 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1101 09:21:19.759757  216020 cri.go:89] found id: ""
	I1101 09:21:19.759792  216020 logs.go:282] 0 containers: []
	W1101 09:21:19.759803  216020 logs.go:284] No container was found matching "kube-proxy"
	I1101 09:21:19.759811  216020 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1101 09:21:19.759900  216020 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1101 09:21:19.797006  216020 cri.go:89] found id: "b266a6bfcdcb5bd3d3e21a7e3af65c4839577648b3116ca9c382cea59b909867"
	I1101 09:21:19.797035  216020 cri.go:89] found id: "df274c74effda434b9fae196ccc17a5c3a3d2b7872fbe77a79ff7e34adc6b3cd"
	I1101 09:21:19.797042  216020 cri.go:89] found id: ""
	I1101 09:21:19.797053  216020 logs.go:282] 2 containers: [b266a6bfcdcb5bd3d3e21a7e3af65c4839577648b3116ca9c382cea59b909867 df274c74effda434b9fae196ccc17a5c3a3d2b7872fbe77a79ff7e34adc6b3cd]
	I1101 09:21:19.797124  216020 ssh_runner.go:195] Run: which crictl
	I1101 09:21:19.802830  216020 ssh_runner.go:195] Run: which crictl
	I1101 09:21:19.807973  216020 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1101 09:21:19.808057  216020 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1101 09:21:16.486132  262357 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21835-5913/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v default-k8s-diff-port-648641:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir: (4.512570954s)
	I1101 09:21:16.486168  262357 kic.go:203] duration metric: took 4.512735838s to extract preloaded images to volume ...
	W1101 09:21:16.486267  262357 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1101 09:21:16.486310  262357 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1101 09:21:16.486349  262357 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1101 09:21:16.597382  262357 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname default-k8s-diff-port-648641 --name default-k8s-diff-port-648641 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-648641 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=default-k8s-diff-port-648641 --network default-k8s-diff-port-648641 --ip 192.168.103.2 --volume default-k8s-diff-port-648641:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8444 --publish=127.0.0.1::8444 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8
	I1101 09:21:17.147634  262357 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-648641 --format={{.State.Running}}
	I1101 09:21:17.171138  262357 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-648641 --format={{.State.Status}}
	I1101 09:21:17.192030  262357 cli_runner.go:164] Run: docker exec default-k8s-diff-port-648641 stat /var/lib/dpkg/alternatives/iptables
	I1101 09:21:17.248917  262357 oci.go:144] the created container "default-k8s-diff-port-648641" has a running status.
	I1101 09:21:17.248965  262357 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21835-5913/.minikube/machines/default-k8s-diff-port-648641/id_rsa...
	I1101 09:21:17.431433  262357 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21835-5913/.minikube/machines/default-k8s-diff-port-648641/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1101 09:21:17.466511  262357 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-648641 --format={{.State.Status}}
	I1101 09:21:17.487440  262357 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1101 09:21:17.487463  262357 kic_runner.go:114] Args: [docker exec --privileged default-k8s-diff-port-648641 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1101 09:21:17.561751  262357 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-648641 --format={{.State.Status}}
	I1101 09:21:17.587513  262357 machine.go:94] provisionDockerMachine start ...
	I1101 09:21:17.587618  262357 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-648641
	I1101 09:21:17.615964  262357 main.go:143] libmachine: Using SSH client type: native
	I1101 09:21:17.616207  262357 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33083 <nil> <nil>}
	I1101 09:21:17.616217  262357 main.go:143] libmachine: About to run SSH command:
	hostname
	I1101 09:21:17.772779  262357 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-648641
	
	I1101 09:21:17.772810  262357 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-648641"
	I1101 09:21:17.772955  262357 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-648641
	I1101 09:21:17.799035  262357 main.go:143] libmachine: Using SSH client type: native
	I1101 09:21:17.799440  262357 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33083 <nil> <nil>}
	I1101 09:21:17.799462  262357 main.go:143] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-648641 && echo "default-k8s-diff-port-648641" | sudo tee /etc/hostname
	I1101 09:21:17.969701  262357 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-648641
	
	I1101 09:21:17.969822  262357 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-648641
	I1101 09:21:17.998371  262357 main.go:143] libmachine: Using SSH client type: native
	I1101 09:21:17.999222  262357 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33083 <nil> <nil>}
	I1101 09:21:17.999273  262357 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-648641' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-648641/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-648641' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1101 09:21:18.165152  262357 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1101 09:21:18.165188  262357 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21835-5913/.minikube CaCertPath:/home/jenkins/minikube-integration/21835-5913/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21835-5913/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21835-5913/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21835-5913/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21835-5913/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21835-5913/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21835-5913/.minikube}
	I1101 09:21:18.165214  262357 ubuntu.go:190] setting up certificates
	I1101 09:21:18.165227  262357 provision.go:84] configureAuth start
	I1101 09:21:18.165300  262357 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-648641
	I1101 09:21:18.190725  262357 provision.go:143] copyHostCerts
	I1101 09:21:18.190827  262357 exec_runner.go:144] found /home/jenkins/minikube-integration/21835-5913/.minikube/ca.pem, removing ...
	I1101 09:21:18.190844  262357 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21835-5913/.minikube/ca.pem
	I1101 09:21:18.190924  262357 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21835-5913/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21835-5913/.minikube/ca.pem (1078 bytes)
	I1101 09:21:18.191055  262357 exec_runner.go:144] found /home/jenkins/minikube-integration/21835-5913/.minikube/cert.pem, removing ...
	I1101 09:21:18.191067  262357 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21835-5913/.minikube/cert.pem
	I1101 09:21:18.191108  262357 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21835-5913/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21835-5913/.minikube/cert.pem (1123 bytes)
	I1101 09:21:18.191198  262357 exec_runner.go:144] found /home/jenkins/minikube-integration/21835-5913/.minikube/key.pem, removing ...
	I1101 09:21:18.191209  262357 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21835-5913/.minikube/key.pem
	I1101 09:21:18.191244  262357 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21835-5913/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21835-5913/.minikube/key.pem (1675 bytes)
	I1101 09:21:18.191324  262357 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21835-5913/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21835-5913/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21835-5913/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-648641 san=[127.0.0.1 192.168.103.2 default-k8s-diff-port-648641 localhost minikube]
	I1101 09:21:18.738803  262357 provision.go:177] copyRemoteCerts
	I1101 09:21:18.738895  262357 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1101 09:21:18.738942  262357 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-648641
	I1101 09:21:18.761365  262357 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33083 SSHKeyPath:/home/jenkins/minikube-integration/21835-5913/.minikube/machines/default-k8s-diff-port-648641/id_rsa Username:docker}
	I1101 09:21:18.877913  262357 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-5913/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1101 09:21:18.906075  262357 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-5913/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1101 09:21:18.932027  262357 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-5913/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1101 09:21:18.959236  262357 provision.go:87] duration metric: took 793.970541ms to configureAuth
	I1101 09:21:18.959276  262357 ubuntu.go:206] setting minikube options for container-runtime
	I1101 09:21:18.959483  262357 config.go:182] Loaded profile config "default-k8s-diff-port-648641": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:21:18.959635  262357 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-648641
	I1101 09:21:18.983609  262357 main.go:143] libmachine: Using SSH client type: native
	I1101 09:21:18.983949  262357 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33083 <nil> <nil>}
	I1101 09:21:18.984252  262357 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1101 09:21:19.308095  262357 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1101 09:21:19.308130  262357 machine.go:97] duration metric: took 1.720592129s to provisionDockerMachine
	I1101 09:21:19.308144  262357 client.go:176] duration metric: took 7.925831106s to LocalClient.Create
	I1101 09:21:19.308166  262357 start.go:167] duration metric: took 7.925916594s to libmachine.API.Create "default-k8s-diff-port-648641"
	I1101 09:21:19.308176  262357 start.go:293] postStartSetup for "default-k8s-diff-port-648641" (driver="docker")
	I1101 09:21:19.308195  262357 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1101 09:21:19.308281  262357 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1101 09:21:19.308328  262357 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-648641
	I1101 09:21:19.331386  262357 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33083 SSHKeyPath:/home/jenkins/minikube-integration/21835-5913/.minikube/machines/default-k8s-diff-port-648641/id_rsa Username:docker}
	I1101 09:21:19.444341  262357 ssh_runner.go:195] Run: cat /etc/os-release
	I1101 09:21:19.449749  262357 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1101 09:21:19.449784  262357 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1101 09:21:19.449798  262357 filesync.go:126] Scanning /home/jenkins/minikube-integration/21835-5913/.minikube/addons for local assets ...
	I1101 09:21:19.449854  262357 filesync.go:126] Scanning /home/jenkins/minikube-integration/21835-5913/.minikube/files for local assets ...
	I1101 09:21:19.449995  262357 filesync.go:149] local asset: /home/jenkins/minikube-integration/21835-5913/.minikube/files/etc/ssl/certs/94142.pem -> 94142.pem in /etc/ssl/certs
	I1101 09:21:19.450122  262357 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1101 09:21:19.461479  262357 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-5913/.minikube/files/etc/ssl/certs/94142.pem --> /etc/ssl/certs/94142.pem (1708 bytes)
	I1101 09:21:19.491051  262357 start.go:296] duration metric: took 182.858998ms for postStartSetup
	I1101 09:21:19.491534  262357 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-648641
	I1101 09:21:19.517028  262357 profile.go:143] Saving config to /home/jenkins/minikube-integration/21835-5913/.minikube/profiles/default-k8s-diff-port-648641/config.json ...
	I1101 09:21:19.517349  262357 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1101 09:21:19.517408  262357 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-648641
	I1101 09:21:19.541507  262357 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33083 SSHKeyPath:/home/jenkins/minikube-integration/21835-5913/.minikube/machines/default-k8s-diff-port-648641/id_rsa Username:docker}
	I1101 09:21:19.652734  262357 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1101 09:21:19.659157  262357 start.go:128] duration metric: took 8.28005502s to createHost
	I1101 09:21:19.659188  262357 start.go:83] releasing machines lock for "default-k8s-diff-port-648641", held for 8.280250134s
	I1101 09:21:19.659267  262357 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-648641
	I1101 09:21:19.684195  262357 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1101 09:21:19.684481  262357 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-648641
	I1101 09:21:19.684123  262357 ssh_runner.go:195] Run: cat /version.json
	I1101 09:21:19.684718  262357 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-648641
	I1101 09:21:19.710690  262357 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33083 SSHKeyPath:/home/jenkins/minikube-integration/21835-5913/.minikube/machines/default-k8s-diff-port-648641/id_rsa Username:docker}
	I1101 09:21:19.712349  262357 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33083 SSHKeyPath:/home/jenkins/minikube-integration/21835-5913/.minikube/machines/default-k8s-diff-port-648641/id_rsa Username:docker}
	I1101 09:21:19.823400  262357 ssh_runner.go:195] Run: systemctl --version
	I1101 09:21:19.908689  262357 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1101 09:21:19.964047  262357 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1101 09:21:19.970327  262357 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1101 09:21:19.970400  262357 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1101 09:21:20.014665  262357 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1101 09:21:20.014764  262357 start.go:496] detecting cgroup driver to use...
	I1101 09:21:20.014930  262357 detect.go:190] detected "systemd" cgroup driver on host os
	I1101 09:21:20.015010  262357 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1101 09:21:20.038534  262357 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1101 09:21:20.057741  262357 docker.go:218] disabling cri-docker service (if available) ...
	I1101 09:21:20.057798  262357 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1101 09:21:20.085136  262357 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1101 09:21:20.109896  262357 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1101 09:21:20.247952  262357 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1101 09:21:20.391536  262357 docker.go:234] disabling docker service ...
	I1101 09:21:20.391616  262357 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1101 09:21:20.420963  262357 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1101 09:21:20.440313  262357 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1101 09:21:20.570652  262357 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1101 09:21:20.691676  262357 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1101 09:21:20.709716  262357 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1101 09:21:20.730739  262357 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1101 09:21:20.730822  262357 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:21:20.842859  262357 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1101 09:21:20.842984  262357 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:21:20.904660  262357 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:21:20.960656  262357 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:21:21.017527  262357 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1101 09:21:21.027075  262357 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:21:21.038255  262357 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:21:21.060709  262357 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:21:21.087186  262357 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1101 09:21:21.096178  262357 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1101 09:21:21.104672  262357 ssh_runner.go:195] Run: sudo systemctl daemon-reload
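Note: the sed edits above converge CRI-O on the settings this run needs (pause image, systemd cgroup driver, conmon cgroup, unprivileged low ports). A rough spot-check of the touched keys in /etc/crio/crio.conf.d/02-crio.conf could look like the following sketch; the grep pattern and the expected lines are illustrative, derived only from the commands shown above:
	sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
	  /etc/crio/crio.conf.d/02-crio.conf
	# expected, per the edits above:
	#   pause_image = "registry.k8s.io/pause:3.10.1"
	#   cgroup_manager = "systemd"
	#   conmon_cgroup = "pod"
	#   "net.ipv4.ip_unprivileged_port_start=0",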
	I1101 09:21:16.840701  263568 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1101 09:21:16.841054  263568 start.go:159] libmachine.API.Create for "newest-cni-340756" (driver="docker")
	I1101 09:21:16.841107  263568 client.go:173] LocalClient.Create starting
	I1101 09:21:16.841182  263568 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21835-5913/.minikube/certs/ca.pem
	I1101 09:21:16.841215  263568 main.go:143] libmachine: Decoding PEM data...
	I1101 09:21:16.841233  263568 main.go:143] libmachine: Parsing certificate...
	I1101 09:21:16.841282  263568 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21835-5913/.minikube/certs/cert.pem
	I1101 09:21:16.841300  263568 main.go:143] libmachine: Decoding PEM data...
	I1101 09:21:16.841312  263568 main.go:143] libmachine: Parsing certificate...
	I1101 09:21:16.841698  263568 cli_runner.go:164] Run: docker network inspect newest-cni-340756 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1101 09:21:16.867231  263568 cli_runner.go:211] docker network inspect newest-cni-340756 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1101 09:21:16.867316  263568 network_create.go:284] running [docker network inspect newest-cni-340756] to gather additional debugging logs...
	I1101 09:21:16.867341  263568 cli_runner.go:164] Run: docker network inspect newest-cni-340756
	W1101 09:21:16.892738  263568 cli_runner.go:211] docker network inspect newest-cni-340756 returned with exit code 1
	I1101 09:21:16.892791  263568 network_create.go:287] error running [docker network inspect newest-cni-340756]: docker network inspect newest-cni-340756: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network newest-cni-340756 not found
	I1101 09:21:16.892808  263568 network_create.go:289] output of [docker network inspect newest-cni-340756]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network newest-cni-340756 not found
	
	** /stderr **
	I1101 09:21:16.892947  263568 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1101 09:21:16.918760  263568 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-5f44df6b5a5b IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:aa:38:92:20:b3:ae} reservation:<nil>}
	I1101 09:21:16.919791  263568 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-ec772021a1d5 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:e6:14:7e:99:b1:e5} reservation:<nil>}
	I1101 09:21:16.920676  263568 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-6ef14c0d2e1a IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:ca:5b:36:d5:85:2b} reservation:<nil>}
	I1101 09:21:16.921315  263568 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-2f536846b22c IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:82:b1:bc:21:0c:bb} reservation:<nil>}
	I1101 09:21:16.921649  263568 network.go:211] skipping subnet 192.168.85.0/24 that is taken: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName:br-c9feba7a919c IfaceIPv4:192.168.85.1 IfaceMTU:1500 IfaceMAC:a6:96:07:ef:ec:1e} reservation:<nil>}
	I1101 09:21:16.922539  263568 network.go:206] using free private subnet 192.168.94.0/24: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001dc8f80}
	I1101 09:21:16.922561  263568 network_create.go:124] attempt to create docker network newest-cni-340756 192.168.94.0/24 with gateway 192.168.94.1 and MTU of 1500 ...
	I1101 09:21:16.922611  263568 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.94.0/24 --gateway=192.168.94.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-340756 newest-cni-340756
	I1101 09:21:17.005643  263568 network_create.go:108] docker network newest-cni-340756 192.168.94.0/24 created
	I1101 09:21:17.005678  263568 kic.go:121] calculated static IP "192.168.94.2" for the "newest-cni-340756" container
	I1101 09:21:17.005776  263568 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1101 09:21:17.029571  263568 cli_runner.go:164] Run: docker volume create newest-cni-340756 --label name.minikube.sigs.k8s.io=newest-cni-340756 --label created_by.minikube.sigs.k8s.io=true
	I1101 09:21:17.051568  263568 oci.go:103] Successfully created a docker volume newest-cni-340756
	I1101 09:21:17.051642  263568 cli_runner.go:164] Run: docker run --rm --name newest-cni-340756-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-340756 --entrypoint /usr/bin/test -v newest-cni-340756:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -d /var/lib
	I1101 09:21:17.571227  263568 oci.go:107] Successfully prepared a docker volume newest-cni-340756
	I1101 09:21:17.571380  263568 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1101 09:21:17.571414  263568 kic.go:194] Starting extracting preloaded images to volume ...
	I1101 09:21:17.571510  263568 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21835-5913/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v newest-cni-340756:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir
	I1101 09:21:21.188552  262357 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1101 09:21:23.490005  262357 ssh_runner.go:235] Completed: sudo systemctl restart crio: (2.301416841s)
	I1101 09:21:23.490041  262357 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1101 09:21:23.490092  262357 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1101 09:21:23.494842  262357 start.go:564] Will wait 60s for crictl version
	I1101 09:21:23.494920  262357 ssh_runner.go:195] Run: which crictl
	I1101 09:21:23.499217  262357 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1101 09:21:23.534142  262357 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1101 09:21:23.534230  262357 ssh_runner.go:195] Run: crio --version
	I1101 09:21:23.572089  262357 ssh_runner.go:195] Run: crio --version
	I1101 09:21:23.609461  262357 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	W1101 09:21:21.584988  256247 pod_ready.go:104] pod "coredns-66bc5c9577-wwvth" is not "Ready", error: <nil>
	W1101 09:21:24.079457  256247 pod_ready.go:104] pod "coredns-66bc5c9577-wwvth" is not "Ready", error: <nil>
	I1101 09:21:19.847224  216020 cri.go:89] found id: ""
	I1101 09:21:19.847252  216020 logs.go:282] 0 containers: []
	W1101 09:21:19.847262  216020 logs.go:284] No container was found matching "kindnet"
	I1101 09:21:19.847268  216020 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1101 09:21:19.847328  216020 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1101 09:21:19.882017  216020 cri.go:89] found id: ""
	I1101 09:21:19.882045  216020 logs.go:282] 0 containers: []
	W1101 09:21:19.882056  216020 logs.go:284] No container was found matching "storage-provisioner"
	I1101 09:21:19.882076  216020 logs.go:123] Gathering logs for container status ...
	I1101 09:21:19.882090  216020 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1101 09:21:19.925584  216020 logs.go:123] Gathering logs for kubelet ...
	I1101 09:21:19.925618  216020 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1101 09:21:20.064928  216020 logs.go:123] Gathering logs for describe nodes ...
	I1101 09:21:20.064965  216020 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1101 09:21:20.156656  216020 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1101 09:21:20.156694  216020 logs.go:123] Gathering logs for kube-apiserver [b88c1a6c22eb7f0b872bf19c02addfdad95c9f578a8590d19c57a767cb19559f] ...
	I1101 09:21:20.156710  216020 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b88c1a6c22eb7f0b872bf19c02addfdad95c9f578a8590d19c57a767cb19559f"
	I1101 09:21:20.207508  216020 logs.go:123] Gathering logs for kube-controller-manager [df274c74effda434b9fae196ccc17a5c3a3d2b7872fbe77a79ff7e34adc6b3cd] ...
	I1101 09:21:20.207557  216020 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 df274c74effda434b9fae196ccc17a5c3a3d2b7872fbe77a79ff7e34adc6b3cd"
	I1101 09:21:20.248951  216020 logs.go:123] Gathering logs for dmesg ...
	I1101 09:21:20.248986  216020 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1101 09:21:20.271836  216020 logs.go:123] Gathering logs for kube-scheduler [7bc51deb4b19b20f402b361941a1b67db11ece9919ebd34259e751ec1f07abb8] ...
	I1101 09:21:20.271902  216020 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7bc51deb4b19b20f402b361941a1b67db11ece9919ebd34259e751ec1f07abb8"
	I1101 09:21:20.353127  216020 logs.go:123] Gathering logs for kube-controller-manager [b266a6bfcdcb5bd3d3e21a7e3af65c4839577648b3116ca9c382cea59b909867] ...
	I1101 09:21:20.353172  216020 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b266a6bfcdcb5bd3d3e21a7e3af65c4839577648b3116ca9c382cea59b909867"
	I1101 09:21:20.387495  216020 logs.go:123] Gathering logs for CRI-O ...
	I1101 09:21:20.387535  216020 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1101 09:21:22.975635  216020 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1101 09:21:22.976133  216020 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1101 09:21:22.976185  216020 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1101 09:21:22.976240  216020 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1101 09:21:23.007894  216020 cri.go:89] found id: "b88c1a6c22eb7f0b872bf19c02addfdad95c9f578a8590d19c57a767cb19559f"
	I1101 09:21:23.007922  216020 cri.go:89] found id: ""
	I1101 09:21:23.007932  216020 logs.go:282] 1 containers: [b88c1a6c22eb7f0b872bf19c02addfdad95c9f578a8590d19c57a767cb19559f]
	I1101 09:21:23.008067  216020 ssh_runner.go:195] Run: which crictl
	I1101 09:21:23.012581  216020 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1101 09:21:23.012653  216020 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1101 09:21:23.041211  216020 cri.go:89] found id: ""
	I1101 09:21:23.041238  216020 logs.go:282] 0 containers: []
	W1101 09:21:23.041247  216020 logs.go:284] No container was found matching "etcd"
	I1101 09:21:23.041253  216020 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1101 09:21:23.041303  216020 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1101 09:21:23.074549  216020 cri.go:89] found id: ""
	I1101 09:21:23.074576  216020 logs.go:282] 0 containers: []
	W1101 09:21:23.074587  216020 logs.go:284] No container was found matching "coredns"
	I1101 09:21:23.074595  216020 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1101 09:21:23.074651  216020 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1101 09:21:23.104257  216020 cri.go:89] found id: "7bc51deb4b19b20f402b361941a1b67db11ece9919ebd34259e751ec1f07abb8"
	I1101 09:21:23.104283  216020 cri.go:89] found id: ""
	I1101 09:21:23.104307  216020 logs.go:282] 1 containers: [7bc51deb4b19b20f402b361941a1b67db11ece9919ebd34259e751ec1f07abb8]
	I1101 09:21:23.104368  216020 ssh_runner.go:195] Run: which crictl
	I1101 09:21:23.108627  216020 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1101 09:21:23.108696  216020 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1101 09:21:23.138311  216020 cri.go:89] found id: ""
	I1101 09:21:23.138335  216020 logs.go:282] 0 containers: []
	W1101 09:21:23.138343  216020 logs.go:284] No container was found matching "kube-proxy"
	I1101 09:21:23.138349  216020 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1101 09:21:23.138403  216020 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1101 09:21:23.166959  216020 cri.go:89] found id: "b266a6bfcdcb5bd3d3e21a7e3af65c4839577648b3116ca9c382cea59b909867"
	I1101 09:21:23.166980  216020 cri.go:89] found id: "df274c74effda434b9fae196ccc17a5c3a3d2b7872fbe77a79ff7e34adc6b3cd"
	I1101 09:21:23.166986  216020 cri.go:89] found id: ""
	I1101 09:21:23.167002  216020 logs.go:282] 2 containers: [b266a6bfcdcb5bd3d3e21a7e3af65c4839577648b3116ca9c382cea59b909867 df274c74effda434b9fae196ccc17a5c3a3d2b7872fbe77a79ff7e34adc6b3cd]
	I1101 09:21:23.167062  216020 ssh_runner.go:195] Run: which crictl
	I1101 09:21:23.171247  216020 ssh_runner.go:195] Run: which crictl
	I1101 09:21:23.175370  216020 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1101 09:21:23.175428  216020 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1101 09:21:23.205749  216020 cri.go:89] found id: ""
	I1101 09:21:23.205776  216020 logs.go:282] 0 containers: []
	W1101 09:21:23.205787  216020 logs.go:284] No container was found matching "kindnet"
	I1101 09:21:23.205795  216020 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1101 09:21:23.205846  216020 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1101 09:21:23.234046  216020 cri.go:89] found id: ""
	I1101 09:21:23.234070  216020 logs.go:282] 0 containers: []
	W1101 09:21:23.234079  216020 logs.go:284] No container was found matching "storage-provisioner"
	I1101 09:21:23.234095  216020 logs.go:123] Gathering logs for kubelet ...
	I1101 09:21:23.234106  216020 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1101 09:21:23.323713  216020 logs.go:123] Gathering logs for kube-apiserver [b88c1a6c22eb7f0b872bf19c02addfdad95c9f578a8590d19c57a767cb19559f] ...
	I1101 09:21:23.323762  216020 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b88c1a6c22eb7f0b872bf19c02addfdad95c9f578a8590d19c57a767cb19559f"
	I1101 09:21:23.358473  216020 logs.go:123] Gathering logs for kube-scheduler [7bc51deb4b19b20f402b361941a1b67db11ece9919ebd34259e751ec1f07abb8] ...
	I1101 09:21:23.358516  216020 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7bc51deb4b19b20f402b361941a1b67db11ece9919ebd34259e751ec1f07abb8"
	I1101 09:21:23.436851  216020 logs.go:123] Gathering logs for kube-controller-manager [b266a6bfcdcb5bd3d3e21a7e3af65c4839577648b3116ca9c382cea59b909867] ...
	I1101 09:21:23.436901  216020 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b266a6bfcdcb5bd3d3e21a7e3af65c4839577648b3116ca9c382cea59b909867"
	I1101 09:21:23.468130  216020 logs.go:123] Gathering logs for dmesg ...
	I1101 09:21:23.468165  216020 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1101 09:21:23.489368  216020 logs.go:123] Gathering logs for describe nodes ...
	I1101 09:21:23.489413  216020 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1101 09:21:23.567586  216020 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1101 09:21:23.567611  216020 logs.go:123] Gathering logs for kube-controller-manager [df274c74effda434b9fae196ccc17a5c3a3d2b7872fbe77a79ff7e34adc6b3cd] ...
	I1101 09:21:23.567627  216020 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 df274c74effda434b9fae196ccc17a5c3a3d2b7872fbe77a79ff7e34adc6b3cd"
	I1101 09:21:23.599213  216020 logs.go:123] Gathering logs for CRI-O ...
	I1101 09:21:23.599241  216020 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1101 09:21:23.679306  216020 logs.go:123] Gathering logs for container status ...
	I1101 09:21:23.679340  216020 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1101 09:21:23.611083  262357 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-648641 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1101 09:21:23.635605  262357 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I1101 09:21:23.640214  262357 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.103.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1101 09:21:23.651879  262357 kubeadm.go:884] updating cluster {Name:default-k8s-diff-port-648641 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-648641 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1101 09:21:23.652020  262357 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1101 09:21:23.652078  262357 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 09:21:23.693412  262357 crio.go:514] all images are preloaded for cri-o runtime.
	I1101 09:21:23.693433  262357 crio.go:433] Images already preloaded, skipping extraction
	I1101 09:21:23.693481  262357 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 09:21:23.722392  262357 crio.go:514] all images are preloaded for cri-o runtime.
	I1101 09:21:23.722418  262357 cache_images.go:86] Images are preloaded, skipping loading
	I1101 09:21:23.722430  262357 kubeadm.go:935] updating node { 192.168.103.2 8444 v1.34.1 crio true true} ...
	I1101 09:21:23.722539  262357 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-648641 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-648641 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1101 09:21:23.722663  262357 ssh_runner.go:195] Run: crio config
	I1101 09:21:23.772026  262357 cni.go:84] Creating CNI manager for ""
	I1101 09:21:23.772055  262357 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1101 09:21:23.772085  262357 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1101 09:21:23.772116  262357 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8444 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-648641 NodeName:default-k8s-diff-port-648641 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1101 09:21:23.772283  262357 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-648641"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.103.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
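Note: the rendered kubeadm/kubelet/kube-proxy configuration above is written to /var/tmp/minikube/kubeadm.yaml.new a few lines further down. Once the control plane is up, its ClusterConfiguration half can be cross-checked against what kubeadm persisted; a sketch that reuses the binary and kubeconfig paths from this log and assumes the kubeadm-config ConfigMap name that kubeadm creates by default:
	sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
	  -n kube-system get configmap kubeadm-config -o yaml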
	I1101 09:21:23.772357  262357 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1101 09:21:23.781719  262357 binaries.go:44] Found k8s binaries, skipping transfer
	I1101 09:21:23.781807  262357 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1101 09:21:23.790849  262357 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (379 bytes)
	I1101 09:21:23.806683  262357 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1101 09:21:23.824637  262357 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2227 bytes)
	I1101 09:21:23.840504  262357 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I1101 09:21:23.845601  262357 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1101 09:21:23.857551  262357 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 09:21:23.965822  262357 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1101 09:21:23.992771  262357 certs.go:69] Setting up /home/jenkins/minikube-integration/21835-5913/.minikube/profiles/default-k8s-diff-port-648641 for IP: 192.168.103.2
	I1101 09:21:23.992799  262357 certs.go:195] generating shared ca certs ...
	I1101 09:21:23.992821  262357 certs.go:227] acquiring lock for ca certs: {Name:mkfdee6a84670347521013ebeef165551380cb9c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:21:23.993014  262357 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21835-5913/.minikube/ca.key
	I1101 09:21:23.993072  262357 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21835-5913/.minikube/proxy-client-ca.key
	I1101 09:21:23.993084  262357 certs.go:257] generating profile certs ...
	I1101 09:21:23.993150  262357 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21835-5913/.minikube/profiles/default-k8s-diff-port-648641/client.key
	I1101 09:21:23.993167  262357 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21835-5913/.minikube/profiles/default-k8s-diff-port-648641/client.crt with IP's: []
	I1101 09:21:24.649794  262357 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21835-5913/.minikube/profiles/default-k8s-diff-port-648641/client.crt ...
	I1101 09:21:24.649827  262357 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21835-5913/.minikube/profiles/default-k8s-diff-port-648641/client.crt: {Name:mk48d159ec661a892ecb6482cee9b66b0b9ea0cf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:21:24.650037  262357 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21835-5913/.minikube/profiles/default-k8s-diff-port-648641/client.key ...
	I1101 09:21:24.650053  262357 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21835-5913/.minikube/profiles/default-k8s-diff-port-648641/client.key: {Name:mk9f79181b551fb98ff4a9e4e23d7afc8657fc15 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:21:24.650183  262357 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21835-5913/.minikube/profiles/default-k8s-diff-port-648641/apiserver.key.7ba7d8ea
	I1101 09:21:24.650200  262357 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21835-5913/.minikube/profiles/default-k8s-diff-port-648641/apiserver.crt.7ba7d8ea with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.103.2]
	I1101 09:21:25.253567  262357 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21835-5913/.minikube/profiles/default-k8s-diff-port-648641/apiserver.crt.7ba7d8ea ...
	I1101 09:21:25.253596  262357 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21835-5913/.minikube/profiles/default-k8s-diff-port-648641/apiserver.crt.7ba7d8ea: {Name:mk4398b189fdd3bab322efcd074e4028b1144897 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:21:25.253792  262357 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21835-5913/.minikube/profiles/default-k8s-diff-port-648641/apiserver.key.7ba7d8ea ...
	I1101 09:21:25.253811  262357 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21835-5913/.minikube/profiles/default-k8s-diff-port-648641/apiserver.key.7ba7d8ea: {Name:mk97b095c18c480b9b17921ac02ed3850338c147 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:21:25.253948  262357 certs.go:382] copying /home/jenkins/minikube-integration/21835-5913/.minikube/profiles/default-k8s-diff-port-648641/apiserver.crt.7ba7d8ea -> /home/jenkins/minikube-integration/21835-5913/.minikube/profiles/default-k8s-diff-port-648641/apiserver.crt
	I1101 09:21:25.254066  262357 certs.go:386] copying /home/jenkins/minikube-integration/21835-5913/.minikube/profiles/default-k8s-diff-port-648641/apiserver.key.7ba7d8ea -> /home/jenkins/minikube-integration/21835-5913/.minikube/profiles/default-k8s-diff-port-648641/apiserver.key
	I1101 09:21:25.254162  262357 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21835-5913/.minikube/profiles/default-k8s-diff-port-648641/proxy-client.key
	I1101 09:21:25.254181  262357 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21835-5913/.minikube/profiles/default-k8s-diff-port-648641/proxy-client.crt with IP's: []
	I1101 09:21:25.601921  262357 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21835-5913/.minikube/profiles/default-k8s-diff-port-648641/proxy-client.crt ...
	I1101 09:21:25.601948  262357 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21835-5913/.minikube/profiles/default-k8s-diff-port-648641/proxy-client.crt: {Name:mk5eefec9a7f08e903ffd816191f14fd7bac2543 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:21:25.602107  262357 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21835-5913/.minikube/profiles/default-k8s-diff-port-648641/proxy-client.key ...
	I1101 09:21:25.602121  262357 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21835-5913/.minikube/profiles/default-k8s-diff-port-648641/proxy-client.key: {Name:mk91946f6a52fc49fa1ca52a724a3d3ae7a3f56f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
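Note: the apiserver certificate generated above was requested with the SANs 10.96.0.1, 127.0.0.1, 10.0.0.1 and 192.168.103.2. A sketch for confirming what ended up in the signed certificate, using standard openssl flags and the profile path from this log:
	openssl x509 -noout -text \
	  -in /home/jenkins/minikube-integration/21835-5913/.minikube/profiles/default-k8s-diff-port-648641/apiserver.crt \
	  | grep -A1 'Subject Alternative Name'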
	I1101 09:21:25.602287  262357 certs.go:484] found cert: /home/jenkins/minikube-integration/21835-5913/.minikube/certs/9414.pem (1338 bytes)
	W1101 09:21:25.602319  262357 certs.go:480] ignoring /home/jenkins/minikube-integration/21835-5913/.minikube/certs/9414_empty.pem, impossibly tiny 0 bytes
	I1101 09:21:25.602329  262357 certs.go:484] found cert: /home/jenkins/minikube-integration/21835-5913/.minikube/certs/ca-key.pem (1675 bytes)
	I1101 09:21:25.602352  262357 certs.go:484] found cert: /home/jenkins/minikube-integration/21835-5913/.minikube/certs/ca.pem (1078 bytes)
	I1101 09:21:25.602380  262357 certs.go:484] found cert: /home/jenkins/minikube-integration/21835-5913/.minikube/certs/cert.pem (1123 bytes)
	I1101 09:21:25.602401  262357 certs.go:484] found cert: /home/jenkins/minikube-integration/21835-5913/.minikube/certs/key.pem (1675 bytes)
	I1101 09:21:25.602447  262357 certs.go:484] found cert: /home/jenkins/minikube-integration/21835-5913/.minikube/files/etc/ssl/certs/94142.pem (1708 bytes)
	I1101 09:21:25.603018  262357 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-5913/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1101 09:21:25.622661  262357 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-5913/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1101 09:21:25.641942  262357 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-5913/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1101 09:21:25.661192  262357 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-5913/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1101 09:21:25.682676  262357 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-5913/.minikube/profiles/default-k8s-diff-port-648641/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1101 09:21:25.703490  262357 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-5913/.minikube/profiles/default-k8s-diff-port-648641/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1101 09:21:25.723930  262357 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-5913/.minikube/profiles/default-k8s-diff-port-648641/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1101 09:21:25.743020  262357 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-5913/.minikube/profiles/default-k8s-diff-port-648641/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1101 09:21:25.763306  262357 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-5913/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1101 09:21:25.786852  262357 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-5913/.minikube/certs/9414.pem --> /usr/share/ca-certificates/9414.pem (1338 bytes)
	I1101 09:21:25.806545  262357 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-5913/.minikube/files/etc/ssl/certs/94142.pem --> /usr/share/ca-certificates/94142.pem (1708 bytes)
	I1101 09:21:25.825310  262357 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1101 09:21:25.838925  262357 ssh_runner.go:195] Run: openssl version
	I1101 09:21:25.845240  262357 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1101 09:21:25.854365  262357 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1101 09:21:25.858427  262357 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  1 08:29 /usr/share/ca-certificates/minikubeCA.pem
	I1101 09:21:25.858489  262357 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1101 09:21:25.894021  262357 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1101 09:21:25.903233  262357 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9414.pem && ln -fs /usr/share/ca-certificates/9414.pem /etc/ssl/certs/9414.pem"
	I1101 09:21:25.912055  262357 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9414.pem
	I1101 09:21:25.916046  262357 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  1 08:35 /usr/share/ca-certificates/9414.pem
	I1101 09:21:25.916107  262357 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9414.pem
	I1101 09:21:25.952115  262357 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/9414.pem /etc/ssl/certs/51391683.0"
	I1101 09:21:25.961907  262357 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/94142.pem && ln -fs /usr/share/ca-certificates/94142.pem /etc/ssl/certs/94142.pem"
	I1101 09:21:25.971569  262357 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/94142.pem
	I1101 09:21:25.976077  262357 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  1 08:35 /usr/share/ca-certificates/94142.pem
	I1101 09:21:25.976143  262357 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/94142.pem
	I1101 09:21:26.014698  262357 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/94142.pem /etc/ssl/certs/3ec20f2e.0"
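Note: the symlink names created above (b5213941.0, 51391683.0, 3ec20f2e.0) follow the OpenSSL subject-hash convention used for CA lookup: the hash printed by `openssl x509 -hash` becomes the link name with a `.0` suffix. The pattern behind those pairs of commands, sketched for the minikubeCA case:
	h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"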
	I1101 09:21:26.024531  262357 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1101 09:21:26.028718  262357 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1101 09:21:26.028790  262357 kubeadm.go:401] StartCluster: {Name:default-k8s-diff-port-648641 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-648641 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 09:21:26.028908  262357 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1101 09:21:26.028983  262357 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1101 09:21:26.058470  262357 cri.go:89] found id: ""
	I1101 09:21:26.058543  262357 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1101 09:21:26.067388  262357 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1101 09:21:26.075783  262357 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1101 09:21:26.075839  262357 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1101 09:21:26.084220  262357 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1101 09:21:26.084240  262357 kubeadm.go:158] found existing configuration files:
	
	I1101 09:21:26.084284  262357 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1101 09:21:26.092406  262357 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1101 09:21:26.092466  262357 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1101 09:21:26.100628  262357 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1101 09:21:26.109119  262357 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1101 09:21:26.109195  262357 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1101 09:21:26.120789  262357 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1101 09:21:26.129968  262357 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1101 09:21:26.130023  262357 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1101 09:21:26.137851  262357 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1101 09:21:26.145759  262357 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1101 09:21:26.145812  262357 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1101 09:21:26.153433  262357 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1101 09:21:23.377195  263568 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21835-5913/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v newest-cni-340756:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir: (5.805630806s)
	I1101 09:21:23.377232  263568 kic.go:203] duration metric: took 5.805816487s to extract preloaded images to volume ...
	W1101 09:21:23.377352  263568 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1101 09:21:23.377395  263568 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1101 09:21:23.377444  263568 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1101 09:21:23.460854  263568 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname newest-cni-340756 --name newest-cni-340756 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-340756 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=newest-cni-340756 --network newest-cni-340756 --ip 192.168.94.2 --volume newest-cni-340756:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8
	I1101 09:21:23.784029  263568 cli_runner.go:164] Run: docker container inspect newest-cni-340756 --format={{.State.Running}}
	I1101 09:21:23.805818  263568 cli_runner.go:164] Run: docker container inspect newest-cni-340756 --format={{.State.Status}}
	I1101 09:21:23.826649  263568 cli_runner.go:164] Run: docker exec newest-cni-340756 stat /var/lib/dpkg/alternatives/iptables
	I1101 09:21:23.877209  263568 oci.go:144] the created container "newest-cni-340756" has a running status.
	I1101 09:21:23.877242  263568 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21835-5913/.minikube/machines/newest-cni-340756/id_rsa...
	I1101 09:21:24.024586  263568 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21835-5913/.minikube/machines/newest-cni-340756/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1101 09:21:24.059584  263568 cli_runner.go:164] Run: docker container inspect newest-cni-340756 --format={{.State.Status}}
	I1101 09:21:24.086789  263568 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1101 09:21:24.086828  263568 kic_runner.go:114] Args: [docker exec --privileged newest-cni-340756 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1101 09:21:24.140069  263568 cli_runner.go:164] Run: docker container inspect newest-cni-340756 --format={{.State.Status}}
	I1101 09:21:24.166009  263568 machine.go:94] provisionDockerMachine start ...
	I1101 09:21:24.166103  263568 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-340756
	I1101 09:21:24.187117  263568 main.go:143] libmachine: Using SSH client type: native
	I1101 09:21:24.187461  263568 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33088 <nil> <nil>}
	I1101 09:21:24.187485  263568 main.go:143] libmachine: About to run SSH command:
	hostname
	I1101 09:21:24.340065  263568 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-340756
	
	I1101 09:21:24.340091  263568 ubuntu.go:182] provisioning hostname "newest-cni-340756"
	I1101 09:21:24.340152  263568 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-340756
	I1101 09:21:24.360729  263568 main.go:143] libmachine: Using SSH client type: native
	I1101 09:21:24.361089  263568 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33088 <nil> <nil>}
	I1101 09:21:24.361115  263568 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-340756 && echo "newest-cni-340756" | sudo tee /etc/hostname
	I1101 09:21:24.516930  263568 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-340756
	
	I1101 09:21:24.517026  263568 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-340756
	I1101 09:21:24.538109  263568 main.go:143] libmachine: Using SSH client type: native
	I1101 09:21:24.538337  263568 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33088 <nil> <nil>}
	I1101 09:21:24.538357  263568 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-340756' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-340756/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-340756' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1101 09:21:24.683297  263568 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1101 09:21:24.683332  263568 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21835-5913/.minikube CaCertPath:/home/jenkins/minikube-integration/21835-5913/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21835-5913/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21835-5913/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21835-5913/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21835-5913/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21835-5913/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21835-5913/.minikube}
	I1101 09:21:24.683401  263568 ubuntu.go:190] setting up certificates
	I1101 09:21:24.683417  263568 provision.go:84] configureAuth start
	I1101 09:21:24.683487  263568 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-340756
	I1101 09:21:24.703460  263568 provision.go:143] copyHostCerts
	I1101 09:21:24.703533  263568 exec_runner.go:144] found /home/jenkins/minikube-integration/21835-5913/.minikube/ca.pem, removing ...
	I1101 09:21:24.703549  263568 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21835-5913/.minikube/ca.pem
	I1101 09:21:24.703639  263568 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21835-5913/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21835-5913/.minikube/ca.pem (1078 bytes)
	I1101 09:21:24.703787  263568 exec_runner.go:144] found /home/jenkins/minikube-integration/21835-5913/.minikube/cert.pem, removing ...
	I1101 09:21:24.703801  263568 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21835-5913/.minikube/cert.pem
	I1101 09:21:24.703847  263568 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21835-5913/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21835-5913/.minikube/cert.pem (1123 bytes)
	I1101 09:21:24.703974  263568 exec_runner.go:144] found /home/jenkins/minikube-integration/21835-5913/.minikube/key.pem, removing ...
	I1101 09:21:24.703987  263568 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21835-5913/.minikube/key.pem
	I1101 09:21:24.704026  263568 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21835-5913/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21835-5913/.minikube/key.pem (1675 bytes)
	I1101 09:21:24.704120  263568 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21835-5913/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21835-5913/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21835-5913/.minikube/certs/ca-key.pem org=jenkins.newest-cni-340756 san=[127.0.0.1 192.168.94.2 localhost minikube newest-cni-340756]
	I1101 09:21:24.968401  263568 provision.go:177] copyRemoteCerts
	I1101 09:21:24.968462  263568 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1101 09:21:24.968516  263568 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-340756
	I1101 09:21:24.987661  263568 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/21835-5913/.minikube/machines/newest-cni-340756/id_rsa Username:docker}
	I1101 09:21:25.089711  263568 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-5913/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1101 09:21:25.110667  263568 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-5913/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1101 09:21:25.129152  263568 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-5913/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1101 09:21:25.147498  263568 provision.go:87] duration metric: took 464.066493ms to configureAuth
	I1101 09:21:25.147532  263568 ubuntu.go:206] setting minikube options for container-runtime
	I1101 09:21:25.147731  263568 config.go:182] Loaded profile config "newest-cni-340756": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:21:25.147837  263568 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-340756
	I1101 09:21:25.169430  263568 main.go:143] libmachine: Using SSH client type: native
	I1101 09:21:25.169701  263568 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33088 <nil> <nil>}
	I1101 09:21:25.169738  263568 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1101 09:21:25.435553  263568 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1101 09:21:25.435579  263568 machine.go:97] duration metric: took 1.269545706s to provisionDockerMachine
	I1101 09:21:25.435591  263568 client.go:176] duration metric: took 8.594473608s to LocalClient.Create
	I1101 09:21:25.435611  263568 start.go:167] duration metric: took 8.594560498s to libmachine.API.Create "newest-cni-340756"
	I1101 09:21:25.435620  263568 start.go:293] postStartSetup for "newest-cni-340756" (driver="docker")
	I1101 09:21:25.435633  263568 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1101 09:21:25.435699  263568 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1101 09:21:25.435752  263568 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-340756
	I1101 09:21:25.455448  263568 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/21835-5913/.minikube/machines/newest-cni-340756/id_rsa Username:docker}
	I1101 09:21:25.559491  263568 ssh_runner.go:195] Run: cat /etc/os-release
	I1101 09:21:25.563355  263568 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1101 09:21:25.563393  263568 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1101 09:21:25.563406  263568 filesync.go:126] Scanning /home/jenkins/minikube-integration/21835-5913/.minikube/addons for local assets ...
	I1101 09:21:25.563474  263568 filesync.go:126] Scanning /home/jenkins/minikube-integration/21835-5913/.minikube/files for local assets ...
	I1101 09:21:25.563574  263568 filesync.go:149] local asset: /home/jenkins/minikube-integration/21835-5913/.minikube/files/etc/ssl/certs/94142.pem -> 94142.pem in /etc/ssl/certs
	I1101 09:21:25.563698  263568 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1101 09:21:25.573078  263568 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-5913/.minikube/files/etc/ssl/certs/94142.pem --> /etc/ssl/certs/94142.pem (1708 bytes)
	I1101 09:21:25.594255  263568 start.go:296] duration metric: took 158.619429ms for postStartSetup
	I1101 09:21:25.594641  263568 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-340756
	I1101 09:21:25.613714  263568 profile.go:143] Saving config to /home/jenkins/minikube-integration/21835-5913/.minikube/profiles/newest-cni-340756/config.json ...
	I1101 09:21:25.614000  263568 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1101 09:21:25.614044  263568 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-340756
	I1101 09:21:25.633601  263568 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/21835-5913/.minikube/machines/newest-cni-340756/id_rsa Username:docker}
	I1101 09:21:25.732786  263568 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1101 09:21:25.737820  263568 start.go:128] duration metric: took 8.899363887s to createHost
	I1101 09:21:25.737848  263568 start.go:83] releasing machines lock for "newest-cni-340756", held for 8.899505816s
	I1101 09:21:25.737960  263568 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-340756
	I1101 09:21:25.757417  263568 ssh_runner.go:195] Run: cat /version.json
	I1101 09:21:25.757447  263568 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1101 09:21:25.757471  263568 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-340756
	I1101 09:21:25.757513  263568 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-340756
	I1101 09:21:25.778247  263568 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/21835-5913/.minikube/machines/newest-cni-340756/id_rsa Username:docker}
	I1101 09:21:25.779020  263568 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/21835-5913/.minikube/machines/newest-cni-340756/id_rsa Username:docker}
	I1101 09:21:25.932148  263568 ssh_runner.go:195] Run: systemctl --version
	I1101 09:21:25.938684  263568 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1101 09:21:25.975047  263568 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1101 09:21:25.979859  263568 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1101 09:21:25.979953  263568 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1101 09:21:26.007501  263568 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1101 09:21:26.007525  263568 start.go:496] detecting cgroup driver to use...
	I1101 09:21:26.007559  263568 detect.go:190] detected "systemd" cgroup driver on host os
	I1101 09:21:26.007611  263568 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1101 09:21:26.025281  263568 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1101 09:21:26.038138  263568 docker.go:218] disabling cri-docker service (if available) ...
	I1101 09:21:26.038192  263568 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1101 09:21:26.057086  263568 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1101 09:21:26.076252  263568 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1101 09:21:26.171183  263568 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1101 09:21:26.283510  263568 docker.go:234] disabling docker service ...
	I1101 09:21:26.283575  263568 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1101 09:21:26.309027  263568 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1101 09:21:26.324709  263568 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1101 09:21:26.426167  263568 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1101 09:21:26.527590  263568 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1101 09:21:26.541712  263568 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1101 09:21:26.557160  263568 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1101 09:21:26.557225  263568 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:21:26.569927  263568 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1101 09:21:26.569990  263568 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:21:26.579394  263568 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:21:26.588657  263568 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:21:26.598019  263568 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1101 09:21:26.607343  263568 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:21:26.616850  263568 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:21:26.634587  263568 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:21:26.644446  263568 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1101 09:21:26.652632  263568 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1101 09:21:26.660828  263568 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 09:21:26.753353  263568 ssh_runner.go:195] Run: sudo systemctl restart crio
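The sed edits above (pause image, cgroup manager, conmon cgroup, default sysctls) all target the same drop-in before CRI-O is restarted. As a rough sketch of what they should leave behind, assuming the stock kicbase layout of /etc/crio/crio.conf.d/02-crio.conf (the full file is not shown in the log), the relevant keys could be spot-checked with:

	# illustrative check, not part of the test run
	sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf
	# values implied by the sed commands above:
	#   pause_image = "registry.k8s.io/pause:3.10.1"
	#   cgroup_manager = "systemd"
	#   conmon_cgroup = "pod"
	#   default_sysctls = [ "net.ipv4.ip_unprivileged_port_start=0", ]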
	I1101 09:21:26.868227  263568 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1101 09:21:26.868306  263568 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1101 09:21:26.872471  263568 start.go:564] Will wait 60s for crictl version
	I1101 09:21:26.872539  263568 ssh_runner.go:195] Run: which crictl
	I1101 09:21:26.876284  263568 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1101 09:21:26.904294  263568 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1101 09:21:26.904386  263568 ssh_runner.go:195] Run: crio --version
	I1101 09:21:26.934931  263568 ssh_runner.go:195] Run: crio --version
	I1101 09:21:26.967976  263568 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1101 09:21:26.969225  263568 cli_runner.go:164] Run: docker network inspect newest-cni-340756 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1101 09:21:26.987594  263568 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I1101 09:21:26.991814  263568 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
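The /etc/hosts update above uses a filter-append-copy pattern rather than editing in place: drop any existing host.minikube.internal line, append the new mapping into a temp file, then cp it back (likely because /etc/hosts is bind-mounted into the container, so it can be overwritten but not replaced by rename). A minimal standalone sketch of the same idea, with the IP and name taken from the log and the variable names purely illustrative:

	# illustrative re-statement of the command above
	IP=192.168.94.1; NAME=host.minikube.internal
	{ grep -v $'\t'"${NAME}"'$' /etc/hosts; printf '%s\t%s\n' "$IP" "$NAME"; } > /tmp/hosts.$$
	sudo cp /tmp/hosts.$$ /etc/hosts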
	I1101 09:21:27.004546  263568 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	W1101 09:21:26.569457  256247 pod_ready.go:104] pod "coredns-66bc5c9577-wwvth" is not "Ready", error: <nil>
	W1101 09:21:29.068406  256247 pod_ready.go:104] pod "coredns-66bc5c9577-wwvth" is not "Ready", error: <nil>
	I1101 09:21:26.218002  216020 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1101 09:21:26.218463  216020 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1101 09:21:26.218516  216020 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1101 09:21:26.218571  216020 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1101 09:21:26.248555  216020 cri.go:89] found id: "b88c1a6c22eb7f0b872bf19c02addfdad95c9f578a8590d19c57a767cb19559f"
	I1101 09:21:26.248581  216020 cri.go:89] found id: ""
	I1101 09:21:26.248590  216020 logs.go:282] 1 containers: [b88c1a6c22eb7f0b872bf19c02addfdad95c9f578a8590d19c57a767cb19559f]
	I1101 09:21:26.248653  216020 ssh_runner.go:195] Run: which crictl
	I1101 09:21:26.252797  216020 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1101 09:21:26.252878  216020 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1101 09:21:26.283445  216020 cri.go:89] found id: ""
	I1101 09:21:26.283469  216020 logs.go:282] 0 containers: []
	W1101 09:21:26.283479  216020 logs.go:284] No container was found matching "etcd"
	I1101 09:21:26.283486  216020 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1101 09:21:26.283545  216020 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1101 09:21:26.316521  216020 cri.go:89] found id: ""
	I1101 09:21:26.316551  216020 logs.go:282] 0 containers: []
	W1101 09:21:26.316562  216020 logs.go:284] No container was found matching "coredns"
	I1101 09:21:26.316570  216020 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1101 09:21:26.316633  216020 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1101 09:21:26.346452  216020 cri.go:89] found id: "7bc51deb4b19b20f402b361941a1b67db11ece9919ebd34259e751ec1f07abb8"
	I1101 09:21:26.346479  216020 cri.go:89] found id: ""
	I1101 09:21:26.346486  216020 logs.go:282] 1 containers: [7bc51deb4b19b20f402b361941a1b67db11ece9919ebd34259e751ec1f07abb8]
	I1101 09:21:26.346535  216020 ssh_runner.go:195] Run: which crictl
	I1101 09:21:26.350481  216020 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1101 09:21:26.350546  216020 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1101 09:21:26.385585  216020 cri.go:89] found id: ""
	I1101 09:21:26.385616  216020 logs.go:282] 0 containers: []
	W1101 09:21:26.385626  216020 logs.go:284] No container was found matching "kube-proxy"
	I1101 09:21:26.385635  216020 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1101 09:21:26.385690  216020 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1101 09:21:26.416425  216020 cri.go:89] found id: "b266a6bfcdcb5bd3d3e21a7e3af65c4839577648b3116ca9c382cea59b909867"
	I1101 09:21:26.416451  216020 cri.go:89] found id: "df274c74effda434b9fae196ccc17a5c3a3d2b7872fbe77a79ff7e34adc6b3cd"
	I1101 09:21:26.416455  216020 cri.go:89] found id: ""
	I1101 09:21:26.416463  216020 logs.go:282] 2 containers: [b266a6bfcdcb5bd3d3e21a7e3af65c4839577648b3116ca9c382cea59b909867 df274c74effda434b9fae196ccc17a5c3a3d2b7872fbe77a79ff7e34adc6b3cd]
	I1101 09:21:26.416519  216020 ssh_runner.go:195] Run: which crictl
	I1101 09:21:26.421223  216020 ssh_runner.go:195] Run: which crictl
	I1101 09:21:26.425693  216020 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1101 09:21:26.425771  216020 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1101 09:21:26.459495  216020 cri.go:89] found id: ""
	I1101 09:21:26.459525  216020 logs.go:282] 0 containers: []
	W1101 09:21:26.459535  216020 logs.go:284] No container was found matching "kindnet"
	I1101 09:21:26.459543  216020 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1101 09:21:26.459606  216020 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1101 09:21:26.498235  216020 cri.go:89] found id: ""
	I1101 09:21:26.498264  216020 logs.go:282] 0 containers: []
	W1101 09:21:26.498275  216020 logs.go:284] No container was found matching "storage-provisioner"
	I1101 09:21:26.498292  216020 logs.go:123] Gathering logs for dmesg ...
	I1101 09:21:26.498307  216020 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1101 09:21:26.516258  216020 logs.go:123] Gathering logs for kube-controller-manager [df274c74effda434b9fae196ccc17a5c3a3d2b7872fbe77a79ff7e34adc6b3cd] ...
	I1101 09:21:26.516305  216020 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 df274c74effda434b9fae196ccc17a5c3a3d2b7872fbe77a79ff7e34adc6b3cd"
	I1101 09:21:26.547393  216020 logs.go:123] Gathering logs for CRI-O ...
	I1101 09:21:26.547419  216020 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1101 09:21:26.605955  216020 logs.go:123] Gathering logs for kubelet ...
	I1101 09:21:26.605993  216020 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1101 09:21:26.712155  216020 logs.go:123] Gathering logs for describe nodes ...
	I1101 09:21:26.712196  216020 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1101 09:21:26.777771  216020 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1101 09:21:26.777797  216020 logs.go:123] Gathering logs for kube-apiserver [b88c1a6c22eb7f0b872bf19c02addfdad95c9f578a8590d19c57a767cb19559f] ...
	I1101 09:21:26.777815  216020 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b88c1a6c22eb7f0b872bf19c02addfdad95c9f578a8590d19c57a767cb19559f"
	I1101 09:21:26.817033  216020 logs.go:123] Gathering logs for kube-scheduler [7bc51deb4b19b20f402b361941a1b67db11ece9919ebd34259e751ec1f07abb8] ...
	I1101 09:21:26.817067  216020 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7bc51deb4b19b20f402b361941a1b67db11ece9919ebd34259e751ec1f07abb8"
	I1101 09:21:26.878664  216020 logs.go:123] Gathering logs for kube-controller-manager [b266a6bfcdcb5bd3d3e21a7e3af65c4839577648b3116ca9c382cea59b909867] ...
	I1101 09:21:26.878701  216020 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b266a6bfcdcb5bd3d3e21a7e3af65c4839577648b3116ca9c382cea59b909867"
	I1101 09:21:26.909010  216020 logs.go:123] Gathering logs for container status ...
	I1101 09:21:26.909045  216020 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1101 09:21:29.443287  216020 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1101 09:21:29.443773  216020 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1101 09:21:29.443836  216020 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1101 09:21:29.443912  216020 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1101 09:21:29.474362  216020 cri.go:89] found id: "b88c1a6c22eb7f0b872bf19c02addfdad95c9f578a8590d19c57a767cb19559f"
	I1101 09:21:29.474391  216020 cri.go:89] found id: ""
	I1101 09:21:29.474403  216020 logs.go:282] 1 containers: [b88c1a6c22eb7f0b872bf19c02addfdad95c9f578a8590d19c57a767cb19559f]
	I1101 09:21:29.474465  216020 ssh_runner.go:195] Run: which crictl
	I1101 09:21:29.478946  216020 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1101 09:21:29.479020  216020 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1101 09:21:29.508009  216020 cri.go:89] found id: ""
	I1101 09:21:29.508034  216020 logs.go:282] 0 containers: []
	W1101 09:21:29.508046  216020 logs.go:284] No container was found matching "etcd"
	I1101 09:21:29.508054  216020 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1101 09:21:29.508106  216020 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1101 09:21:29.538327  216020 cri.go:89] found id: ""
	I1101 09:21:29.538352  216020 logs.go:282] 0 containers: []
	W1101 09:21:29.538362  216020 logs.go:284] No container was found matching "coredns"
	I1101 09:21:29.538369  216020 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1101 09:21:29.538425  216020 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1101 09:21:29.567717  216020 cri.go:89] found id: "7bc51deb4b19b20f402b361941a1b67db11ece9919ebd34259e751ec1f07abb8"
	I1101 09:21:29.567742  216020 cri.go:89] found id: ""
	I1101 09:21:29.567750  216020 logs.go:282] 1 containers: [7bc51deb4b19b20f402b361941a1b67db11ece9919ebd34259e751ec1f07abb8]
	I1101 09:21:29.567817  216020 ssh_runner.go:195] Run: which crictl
	I1101 09:21:29.572414  216020 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1101 09:21:29.572483  216020 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1101 09:21:29.605199  216020 cri.go:89] found id: ""
	I1101 09:21:29.605227  216020 logs.go:282] 0 containers: []
	W1101 09:21:29.605238  216020 logs.go:284] No container was found matching "kube-proxy"
	I1101 09:21:29.605244  216020 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1101 09:21:29.605313  216020 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1101 09:21:29.637360  216020 cri.go:89] found id: "b266a6bfcdcb5bd3d3e21a7e3af65c4839577648b3116ca9c382cea59b909867"
	I1101 09:21:29.637387  216020 cri.go:89] found id: ""
	I1101 09:21:29.637397  216020 logs.go:282] 1 containers: [b266a6bfcdcb5bd3d3e21a7e3af65c4839577648b3116ca9c382cea59b909867]
	I1101 09:21:29.637456  216020 ssh_runner.go:195] Run: which crictl
	I1101 09:21:29.642045  216020 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1101 09:21:29.642113  216020 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1101 09:21:29.673568  216020 cri.go:89] found id: ""
	I1101 09:21:29.673593  216020 logs.go:282] 0 containers: []
	W1101 09:21:29.673604  216020 logs.go:284] No container was found matching "kindnet"
	I1101 09:21:29.673612  216020 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1101 09:21:29.673670  216020 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1101 09:21:29.706496  216020 cri.go:89] found id: ""
	I1101 09:21:29.706522  216020 logs.go:282] 0 containers: []
	W1101 09:21:29.706529  216020 logs.go:284] No container was found matching "storage-provisioner"
	I1101 09:21:29.706538  216020 logs.go:123] Gathering logs for kube-apiserver [b88c1a6c22eb7f0b872bf19c02addfdad95c9f578a8590d19c57a767cb19559f] ...
	I1101 09:21:29.706555  216020 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b88c1a6c22eb7f0b872bf19c02addfdad95c9f578a8590d19c57a767cb19559f"
	I1101 09:21:29.740598  216020 logs.go:123] Gathering logs for kube-scheduler [7bc51deb4b19b20f402b361941a1b67db11ece9919ebd34259e751ec1f07abb8] ...
	I1101 09:21:29.740627  216020 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7bc51deb4b19b20f402b361941a1b67db11ece9919ebd34259e751ec1f07abb8"
	I1101 09:21:29.809231  216020 logs.go:123] Gathering logs for kube-controller-manager [b266a6bfcdcb5bd3d3e21a7e3af65c4839577648b3116ca9c382cea59b909867] ...
	I1101 09:21:29.809269  216020 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b266a6bfcdcb5bd3d3e21a7e3af65c4839577648b3116ca9c382cea59b909867"
	I1101 09:21:26.220643  262357 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1043-gcp\n", err: exit status 1
	I1101 09:21:26.299212  262357 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1101 09:21:27.005755  263568 kubeadm.go:884] updating cluster {Name:newest-cni-340756 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-340756 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1101 09:21:27.005934  263568 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1101 09:21:27.006015  263568 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 09:21:27.038856  263568 crio.go:514] all images are preloaded for cri-o runtime.
	I1101 09:21:27.038908  263568 crio.go:433] Images already preloaded, skipping extraction
	I1101 09:21:27.038962  263568 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 09:21:27.067077  263568 crio.go:514] all images are preloaded for cri-o runtime.
	I1101 09:21:27.067097  263568 cache_images.go:86] Images are preloaded, skipping loading
	I1101 09:21:27.067104  263568 kubeadm.go:935] updating node { 192.168.94.2 8443 v1.34.1 crio true true} ...
	I1101 09:21:27.067207  263568 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-340756 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:newest-cni-340756 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1101 09:21:27.067302  263568 ssh_runner.go:195] Run: crio config
	I1101 09:21:27.115846  263568 cni.go:84] Creating CNI manager for ""
	I1101 09:21:27.115889  263568 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1101 09:21:27.115917  263568 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1101 09:21:27.115941  263568 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-340756 NodeName:newest-cni-340756 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1101 09:21:27.116109  263568 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-340756"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.94.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1101 09:21:27.116184  263568 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1101 09:21:27.124904  263568 binaries.go:44] Found k8s binaries, skipping transfer
	I1101 09:21:27.124986  263568 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1101 09:21:27.133615  263568 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1101 09:21:27.148747  263568 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1101 09:21:27.168585  263568 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2211 bytes)
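At this point the generated kubeadm config (2211 bytes, matching the YAML dumped above) has been staged as /var/tmp/minikube/kubeadm.yaml.new; it is copied to kubeadm.yaml and fed to kubeadm init further down. If one wanted to sanity-check such a staged config by hand, one hedged option is kubeadm's dry-run mode (not something the test does):

	# illustrative only; the test proceeds straight to the real init
	sudo /var/lib/minikube/binaries/v1.34.1/kubeadm init \
	  --config /var/tmp/minikube/kubeadm.yaml.new --dry-run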
	I1101 09:21:27.183947  263568 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I1101 09:21:27.188114  263568 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1101 09:21:27.200232  263568 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 09:21:27.286309  263568 ssh_runner.go:195] Run: sudo systemctl start kubelet
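With the 10-kubeadm.conf drop-in and kubelet.service written and the daemon reloaded, kubelet is started before any control-plane config exists, so it may restart repeatedly until kubeadm init completes. Two standard systemd checks (illustrative, not run by the test) for confirming the drop-in was picked up:

	systemctl cat kubelet        # should show kubelet.service plus 10-kubeadm.conf
	systemctl is-active kubelet  # may report activating/failed until kubeadm init finishes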
	I1101 09:21:27.314555  263568 certs.go:69] Setting up /home/jenkins/minikube-integration/21835-5913/.minikube/profiles/newest-cni-340756 for IP: 192.168.94.2
	I1101 09:21:27.314579  263568 certs.go:195] generating shared ca certs ...
	I1101 09:21:27.314600  263568 certs.go:227] acquiring lock for ca certs: {Name:mkfdee6a84670347521013ebeef165551380cb9c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:21:27.314763  263568 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21835-5913/.minikube/ca.key
	I1101 09:21:27.314804  263568 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21835-5913/.minikube/proxy-client-ca.key
	I1101 09:21:27.314813  263568 certs.go:257] generating profile certs ...
	I1101 09:21:27.314880  263568 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21835-5913/.minikube/profiles/newest-cni-340756/client.key
	I1101 09:21:27.314901  263568 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21835-5913/.minikube/profiles/newest-cni-340756/client.crt with IP's: []
	I1101 09:21:27.692177  263568 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21835-5913/.minikube/profiles/newest-cni-340756/client.crt ...
	I1101 09:21:27.692209  263568 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21835-5913/.minikube/profiles/newest-cni-340756/client.crt: {Name:mkbb4ae05d45ea00cbc1fad0c09f2509b5385f8d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:21:27.692409  263568 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21835-5913/.minikube/profiles/newest-cni-340756/client.key ...
	I1101 09:21:27.692445  263568 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21835-5913/.minikube/profiles/newest-cni-340756/client.key: {Name:mk586a11d387791617c3fc6e5017b434d57db019 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:21:27.692549  263568 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21835-5913/.minikube/profiles/newest-cni-340756/apiserver.key.b81bb48a
	I1101 09:21:27.692564  263568 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21835-5913/.minikube/profiles/newest-cni-340756/apiserver.crt.b81bb48a with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.94.2]
	I1101 09:21:27.755373  263568 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21835-5913/.minikube/profiles/newest-cni-340756/apiserver.crt.b81bb48a ...
	I1101 09:21:27.755407  263568 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21835-5913/.minikube/profiles/newest-cni-340756/apiserver.crt.b81bb48a: {Name:mk6e0c5df36bbbcbb489321948b8dc7e48e0d551 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:21:27.755593  263568 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21835-5913/.minikube/profiles/newest-cni-340756/apiserver.key.b81bb48a ...
	I1101 09:21:27.755606  263568 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21835-5913/.minikube/profiles/newest-cni-340756/apiserver.key.b81bb48a: {Name:mkfbcb52bf51885ea6244fdb7f88dfae8b653a3e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:21:27.755676  263568 certs.go:382] copying /home/jenkins/minikube-integration/21835-5913/.minikube/profiles/newest-cni-340756/apiserver.crt.b81bb48a -> /home/jenkins/minikube-integration/21835-5913/.minikube/profiles/newest-cni-340756/apiserver.crt
	I1101 09:21:27.755761  263568 certs.go:386] copying /home/jenkins/minikube-integration/21835-5913/.minikube/profiles/newest-cni-340756/apiserver.key.b81bb48a -> /home/jenkins/minikube-integration/21835-5913/.minikube/profiles/newest-cni-340756/apiserver.key
	I1101 09:21:27.755853  263568 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21835-5913/.minikube/profiles/newest-cni-340756/proxy-client.key
	I1101 09:21:27.755884  263568 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21835-5913/.minikube/profiles/newest-cni-340756/proxy-client.crt with IP's: []
	I1101 09:21:27.983684  263568 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21835-5913/.minikube/profiles/newest-cni-340756/proxy-client.crt ...
	I1101 09:21:27.983715  263568 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21835-5913/.minikube/profiles/newest-cni-340756/proxy-client.crt: {Name:mk2a01f0c56ea811a822018dd77d41193ca99202 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:21:27.983905  263568 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21835-5913/.minikube/profiles/newest-cni-340756/proxy-client.key ...
	I1101 09:21:27.983920  263568 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21835-5913/.minikube/profiles/newest-cni-340756/proxy-client.key: {Name:mk7a924dbb62fa5d4646b90d42dbf16e12de6201 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:21:27.984087  263568 certs.go:484] found cert: /home/jenkins/minikube-integration/21835-5913/.minikube/certs/9414.pem (1338 bytes)
	W1101 09:21:27.984124  263568 certs.go:480] ignoring /home/jenkins/minikube-integration/21835-5913/.minikube/certs/9414_empty.pem, impossibly tiny 0 bytes
	I1101 09:21:27.984139  263568 certs.go:484] found cert: /home/jenkins/minikube-integration/21835-5913/.minikube/certs/ca-key.pem (1675 bytes)
	I1101 09:21:27.984170  263568 certs.go:484] found cert: /home/jenkins/minikube-integration/21835-5913/.minikube/certs/ca.pem (1078 bytes)
	I1101 09:21:27.984203  263568 certs.go:484] found cert: /home/jenkins/minikube-integration/21835-5913/.minikube/certs/cert.pem (1123 bytes)
	I1101 09:21:27.984232  263568 certs.go:484] found cert: /home/jenkins/minikube-integration/21835-5913/.minikube/certs/key.pem (1675 bytes)
	I1101 09:21:27.984273  263568 certs.go:484] found cert: /home/jenkins/minikube-integration/21835-5913/.minikube/files/etc/ssl/certs/94142.pem (1708 bytes)
	I1101 09:21:27.984823  263568 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-5913/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1101 09:21:28.003673  263568 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-5913/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1101 09:21:28.022550  263568 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-5913/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1101 09:21:28.041261  263568 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-5913/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1101 09:21:28.060164  263568 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-5913/.minikube/profiles/newest-cni-340756/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1101 09:21:28.079743  263568 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-5913/.minikube/profiles/newest-cni-340756/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1101 09:21:28.098558  263568 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-5913/.minikube/profiles/newest-cni-340756/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1101 09:21:28.117407  263568 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-5913/.minikube/profiles/newest-cni-340756/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1101 09:21:28.136073  263568 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-5913/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1101 09:21:28.157077  263568 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-5913/.minikube/certs/9414.pem --> /usr/share/ca-certificates/9414.pem (1338 bytes)
	I1101 09:21:28.177850  263568 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-5913/.minikube/files/etc/ssl/certs/94142.pem --> /usr/share/ca-certificates/94142.pem (1708 bytes)
	I1101 09:21:28.198397  263568 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1101 09:21:28.212819  263568 ssh_runner.go:195] Run: openssl version
	I1101 09:21:28.219792  263568 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1101 09:21:28.228932  263568 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1101 09:21:28.233066  263568 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  1 08:29 /usr/share/ca-certificates/minikubeCA.pem
	I1101 09:21:28.233136  263568 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1101 09:21:28.268458  263568 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1101 09:21:28.277690  263568 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9414.pem && ln -fs /usr/share/ca-certificates/9414.pem /etc/ssl/certs/9414.pem"
	I1101 09:21:28.286876  263568 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9414.pem
	I1101 09:21:28.291269  263568 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  1 08:35 /usr/share/ca-certificates/9414.pem
	I1101 09:21:28.291329  263568 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9414.pem
	I1101 09:21:28.325538  263568 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/9414.pem /etc/ssl/certs/51391683.0"
	I1101 09:21:28.334805  263568 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/94142.pem && ln -fs /usr/share/ca-certificates/94142.pem /etc/ssl/certs/94142.pem"
	I1101 09:21:28.343980  263568 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/94142.pem
	I1101 09:21:28.348001  263568 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  1 08:35 /usr/share/ca-certificates/94142.pem
	I1101 09:21:28.348063  263568 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/94142.pem
	I1101 09:21:28.382381  263568 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/94142.pem /etc/ssl/certs/3ec20f2e.0"
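The openssl x509 -hash / ln -fs pairs above implement OpenSSL's hashed-directory CA lookup: each CA file in /etc/ssl/certs gets a symlink named after its subject hash with a .0 suffix (b5213941.0, 51391683.0, 3ec20f2e.0 in this run). A minimal sketch of the general pattern, using the minikube CA path from the log:

	CERT=/etc/ssl/certs/minikubeCA.pem
	HASH=$(openssl x509 -hash -noout -in "$CERT")   # b5213941 in the log above
	sudo ln -fs "$CERT" "/etc/ssl/certs/${HASH}.0"  # same link the test creates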
	I1101 09:21:28.391727  263568 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1101 09:21:28.395966  263568 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1101 09:21:28.396022  263568 kubeadm.go:401] StartCluster: {Name:newest-cni-340756 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-340756 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 09:21:28.396112  263568 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1101 09:21:28.396181  263568 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1101 09:21:28.427389  263568 cri.go:89] found id: ""
	I1101 09:21:28.427460  263568 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1101 09:21:28.436396  263568 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1101 09:21:28.445034  263568 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1101 09:21:28.445096  263568 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1101 09:21:28.453536  263568 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1101 09:21:28.453555  263568 kubeadm.go:158] found existing configuration files:
	
	I1101 09:21:28.453599  263568 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1101 09:21:28.462257  263568 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1101 09:21:28.462315  263568 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1101 09:21:28.470478  263568 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1101 09:21:28.479045  263568 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1101 09:21:28.479118  263568 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1101 09:21:28.487232  263568 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1101 09:21:28.495250  263568 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1101 09:21:28.495317  263568 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1101 09:21:28.503323  263568 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1101 09:21:28.513024  263568 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1101 09:21:28.513166  263568 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1101 09:21:28.522016  263568 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1101 09:21:28.587302  263568 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1043-gcp\n", err: exit status 1
	I1101 09:21:28.653705  263568 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
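The init command above (the Start line for process 263568) disables a fixed list of preflight checks because minikube pre-stages /etc/kubernetes and /var/lib/minikube; the two [WARNING] lines are the only findings that remain. As a minimal sketch, not part of the test run, the preflight phase alone can be re-run by hand using the paths already shown in this log (the shortened ignore list is an assumption for illustration):

    # Re-run only kubeadm's preflight phase against the generated config,
    # skipping the kernel-config and kubelet-service checks warned about above.
    sudo env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" \
      kubeadm init phase preflight \
        --config /var/tmp/minikube/kubeadm.yaml \
        --ignore-preflight-errors=SystemVerification,Service-Kubelet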
	W1101 09:21:31.069175  256247 pod_ready.go:104] pod "coredns-66bc5c9577-wwvth" is not "Ready", error: <nil>
	W1101 09:21:33.070999  256247 pod_ready.go:104] pod "coredns-66bc5c9577-wwvth" is not "Ready", error: <nil>
	I1101 09:21:29.839290  216020 logs.go:123] Gathering logs for CRI-O ...
	I1101 09:21:29.839318  216020 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1101 09:21:29.895909  216020 logs.go:123] Gathering logs for container status ...
	I1101 09:21:29.895943  216020 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1101 09:21:29.929482  216020 logs.go:123] Gathering logs for kubelet ...
	I1101 09:21:29.929510  216020 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1101 09:21:30.037822  216020 logs.go:123] Gathering logs for dmesg ...
	I1101 09:21:30.037874  216020 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1101 09:21:30.054657  216020 logs.go:123] Gathering logs for describe nodes ...
	I1101 09:21:30.054690  216020 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1101 09:21:30.122222  216020 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1101 09:21:32.622956  216020 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1101 09:21:32.623463  216020 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1101 09:21:32.623527  216020 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1101 09:21:32.623583  216020 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1101 09:21:32.663084  216020 cri.go:89] found id: "b88c1a6c22eb7f0b872bf19c02addfdad95c9f578a8590d19c57a767cb19559f"
	I1101 09:21:32.663130  216020 cri.go:89] found id: ""
	I1101 09:21:32.663146  216020 logs.go:282] 1 containers: [b88c1a6c22eb7f0b872bf19c02addfdad95c9f578a8590d19c57a767cb19559f]
	I1101 09:21:32.663225  216020 ssh_runner.go:195] Run: which crictl
	I1101 09:21:32.668663  216020 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1101 09:21:32.668752  216020 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1101 09:21:32.705016  216020 cri.go:89] found id: ""
	I1101 09:21:32.705050  216020 logs.go:282] 0 containers: []
	W1101 09:21:32.705062  216020 logs.go:284] No container was found matching "etcd"
	I1101 09:21:32.705069  216020 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1101 09:21:32.705128  216020 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1101 09:21:32.742963  216020 cri.go:89] found id: ""
	I1101 09:21:32.742994  216020 logs.go:282] 0 containers: []
	W1101 09:21:32.743004  216020 logs.go:284] No container was found matching "coredns"
	I1101 09:21:32.743012  216020 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1101 09:21:32.743072  216020 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1101 09:21:32.776827  216020 cri.go:89] found id: "7bc51deb4b19b20f402b361941a1b67db11ece9919ebd34259e751ec1f07abb8"
	I1101 09:21:32.776856  216020 cri.go:89] found id: ""
	I1101 09:21:32.776894  216020 logs.go:282] 1 containers: [7bc51deb4b19b20f402b361941a1b67db11ece9919ebd34259e751ec1f07abb8]
	I1101 09:21:32.776950  216020 ssh_runner.go:195] Run: which crictl
	I1101 09:21:32.782337  216020 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1101 09:21:32.782413  216020 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1101 09:21:32.821619  216020 cri.go:89] found id: ""
	I1101 09:21:32.821644  216020 logs.go:282] 0 containers: []
	W1101 09:21:32.821654  216020 logs.go:284] No container was found matching "kube-proxy"
	I1101 09:21:32.821661  216020 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1101 09:21:32.821725  216020 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1101 09:21:32.856480  216020 cri.go:89] found id: "b266a6bfcdcb5bd3d3e21a7e3af65c4839577648b3116ca9c382cea59b909867"
	I1101 09:21:32.856504  216020 cri.go:89] found id: ""
	I1101 09:21:32.856514  216020 logs.go:282] 1 containers: [b266a6bfcdcb5bd3d3e21a7e3af65c4839577648b3116ca9c382cea59b909867]
	I1101 09:21:32.856569  216020 ssh_runner.go:195] Run: which crictl
	I1101 09:21:32.861729  216020 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1101 09:21:32.861808  216020 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1101 09:21:32.895569  216020 cri.go:89] found id: ""
	I1101 09:21:32.895601  216020 logs.go:282] 0 containers: []
	W1101 09:21:32.895612  216020 logs.go:284] No container was found matching "kindnet"
	I1101 09:21:32.895621  216020 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1101 09:21:32.895703  216020 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1101 09:21:32.932306  216020 cri.go:89] found id: ""
	I1101 09:21:32.932331  216020 logs.go:282] 0 containers: []
	W1101 09:21:32.932342  216020 logs.go:284] No container was found matching "storage-provisioner"
	I1101 09:21:32.932353  216020 logs.go:123] Gathering logs for kube-scheduler [7bc51deb4b19b20f402b361941a1b67db11ece9919ebd34259e751ec1f07abb8] ...
	I1101 09:21:32.932368  216020 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7bc51deb4b19b20f402b361941a1b67db11ece9919ebd34259e751ec1f07abb8"
	I1101 09:21:32.997846  216020 logs.go:123] Gathering logs for kube-controller-manager [b266a6bfcdcb5bd3d3e21a7e3af65c4839577648b3116ca9c382cea59b909867] ...
	I1101 09:21:32.997892  216020 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b266a6bfcdcb5bd3d3e21a7e3af65c4839577648b3116ca9c382cea59b909867"
	I1101 09:21:33.036043  216020 logs.go:123] Gathering logs for CRI-O ...
	I1101 09:21:33.036086  216020 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1101 09:21:33.123414  216020 logs.go:123] Gathering logs for container status ...
	I1101 09:21:33.123453  216020 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1101 09:21:33.169800  216020 logs.go:123] Gathering logs for kubelet ...
	I1101 09:21:33.169831  216020 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1101 09:21:33.295230  216020 logs.go:123] Gathering logs for dmesg ...
	I1101 09:21:33.295267  216020 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1101 09:21:33.313406  216020 logs.go:123] Gathering logs for describe nodes ...
	I1101 09:21:33.313451  216020 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1101 09:21:33.393782  216020 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1101 09:21:33.393809  216020 logs.go:123] Gathering logs for kube-apiserver [b88c1a6c22eb7f0b872bf19c02addfdad95c9f578a8590d19c57a767cb19559f] ...
	I1101 09:21:33.393825  216020 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b88c1a6c22eb7f0b872bf19c02addfdad95c9f578a8590d19c57a767cb19559f"
	I1101 09:21:37.173559  262357 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1101 09:21:37.173618  262357 kubeadm.go:319] [preflight] Running pre-flight checks
	I1101 09:21:37.173719  262357 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1101 09:21:37.173805  262357 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1043-gcp
	I1101 09:21:37.173907  262357 kubeadm.go:319] OS: Linux
	I1101 09:21:37.173993  262357 kubeadm.go:319] CGROUPS_CPU: enabled
	I1101 09:21:37.174063  262357 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1101 09:21:37.174128  262357 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1101 09:21:37.174195  262357 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1101 09:21:37.174262  262357 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1101 09:21:37.174353  262357 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1101 09:21:37.174424  262357 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1101 09:21:37.174488  262357 kubeadm.go:319] CGROUPS_IO: enabled
	I1101 09:21:37.174628  262357 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1101 09:21:37.174731  262357 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1101 09:21:37.174826  262357 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1101 09:21:37.174991  262357 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1101 09:21:37.176483  262357 out.go:252]   - Generating certificates and keys ...
	I1101 09:21:37.176614  262357 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1101 09:21:37.176733  262357 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1101 09:21:37.176843  262357 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1101 09:21:37.176981  262357 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1101 09:21:37.177075  262357 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1101 09:21:37.177161  262357 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1101 09:21:37.177231  262357 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1101 09:21:37.177406  262357 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [default-k8s-diff-port-648641 localhost] and IPs [192.168.103.2 127.0.0.1 ::1]
	I1101 09:21:37.177469  262357 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1101 09:21:37.177621  262357 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [default-k8s-diff-port-648641 localhost] and IPs [192.168.103.2 127.0.0.1 ::1]
	I1101 09:21:37.177724  262357 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1101 09:21:37.177823  262357 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1101 09:21:37.177897  262357 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1101 09:21:37.177981  262357 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1101 09:21:37.178058  262357 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1101 09:21:37.178133  262357 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1101 09:21:37.178209  262357 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1101 09:21:37.178317  262357 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1101 09:21:37.178416  262357 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1101 09:21:37.178535  262357 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1101 09:21:37.178677  262357 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1101 09:21:37.181296  262357 out.go:252]   - Booting up control plane ...
	I1101 09:21:37.181439  262357 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1101 09:21:37.181577  262357 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1101 09:21:37.181697  262357 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1101 09:21:37.181814  262357 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1101 09:21:37.181979  262357 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1101 09:21:37.182130  262357 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1101 09:21:37.182217  262357 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1101 09:21:37.182256  262357 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1101 09:21:37.182373  262357 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1101 09:21:37.182503  262357 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1101 09:21:37.182581  262357 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 501.981198ms
	I1101 09:21:37.182667  262357 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1101 09:21:37.182801  262357 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.103.2:8444/livez
	I1101 09:21:37.182953  262357 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1101 09:21:37.183074  262357 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1101 09:21:37.183193  262357 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 1.614305294s
	I1101 09:21:37.183252  262357 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 2.436782156s
	I1101 09:21:37.183327  262357 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 4.503947493s
	I1101 09:21:37.183453  262357 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1101 09:21:37.183586  262357 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1101 09:21:37.183656  262357 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1101 09:21:37.183966  262357 kubeadm.go:319] [mark-control-plane] Marking the node default-k8s-diff-port-648641 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1101 09:21:37.184062  262357 kubeadm.go:319] [bootstrap-token] Using token: dttn2w.a8ocz5il4ubw84p7
	I1101 09:21:37.185353  262357 out.go:252]   - Configuring RBAC rules ...
	I1101 09:21:37.185480  262357 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1101 09:21:37.185609  262357 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1101 09:21:37.185808  262357 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1101 09:21:37.185999  262357 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1101 09:21:37.186170  262357 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1101 09:21:37.186311  262357 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1101 09:21:37.186468  262357 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1101 09:21:37.186544  262357 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1101 09:21:37.186626  262357 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1101 09:21:37.186642  262357 kubeadm.go:319] 
	I1101 09:21:37.186717  262357 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1101 09:21:37.186727  262357 kubeadm.go:319] 
	I1101 09:21:37.186809  262357 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1101 09:21:37.186818  262357 kubeadm.go:319] 
	I1101 09:21:37.186840  262357 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1101 09:21:37.186951  262357 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1101 09:21:37.187017  262357 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1101 09:21:37.187028  262357 kubeadm.go:319] 
	I1101 09:21:37.187101  262357 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1101 09:21:37.187110  262357 kubeadm.go:319] 
	I1101 09:21:37.187180  262357 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1101 09:21:37.187188  262357 kubeadm.go:319] 
	I1101 09:21:37.187264  262357 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1101 09:21:37.187367  262357 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1101 09:21:37.187471  262357 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1101 09:21:37.187488  262357 kubeadm.go:319] 
	I1101 09:21:37.187599  262357 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1101 09:21:37.187723  262357 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1101 09:21:37.187741  262357 kubeadm.go:319] 
	I1101 09:21:37.187836  262357 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8444 --token dttn2w.a8ocz5il4ubw84p7 \
	I1101 09:21:37.187993  262357 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:77adef89518789e9abc276228bed68518bce4d031c33004dbbcc2105ab5e0752 \
	I1101 09:21:37.188042  262357 kubeadm.go:319] 	--control-plane 
	I1101 09:21:37.188083  262357 kubeadm.go:319] 
	I1101 09:21:37.188203  262357 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1101 09:21:37.188214  262357 kubeadm.go:319] 
	I1101 09:21:37.188320  262357 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8444 --token dttn2w.a8ocz5il4ubw84p7 \
	I1101 09:21:37.188479  262357 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:77adef89518789e9abc276228bed68518bce4d031c33004dbbcc2105ab5e0752 
	I1101 09:21:37.188490  262357 cni.go:84] Creating CNI manager for ""
	I1101 09:21:37.188499  262357 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1101 09:21:37.190119  262357 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1101 09:21:39.072652  263568 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1101 09:21:39.072740  263568 kubeadm.go:319] [preflight] Running pre-flight checks
	I1101 09:21:39.072909  263568 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1101 09:21:39.073018  263568 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1043-gcp
	I1101 09:21:39.073091  263568 kubeadm.go:319] OS: Linux
	I1101 09:21:39.073179  263568 kubeadm.go:319] CGROUPS_CPU: enabled
	I1101 09:21:39.073252  263568 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1101 09:21:39.073324  263568 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1101 09:21:39.073391  263568 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1101 09:21:39.073461  263568 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1101 09:21:39.073536  263568 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1101 09:21:39.073614  263568 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1101 09:21:39.073685  263568 kubeadm.go:319] CGROUPS_IO: enabled
	I1101 09:21:39.073792  263568 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1101 09:21:39.073954  263568 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1101 09:21:39.074092  263568 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1101 09:21:39.074196  263568 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1101 09:21:39.076838  263568 out.go:252]   - Generating certificates and keys ...
	I1101 09:21:39.076956  263568 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1101 09:21:39.077068  263568 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1101 09:21:39.077160  263568 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1101 09:21:39.077218  263568 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1101 09:21:39.077301  263568 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1101 09:21:39.077350  263568 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1101 09:21:39.077400  263568 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1101 09:21:39.077516  263568 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-340756] and IPs [192.168.94.2 127.0.0.1 ::1]
	I1101 09:21:39.077563  263568 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1101 09:21:39.077727  263568 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-340756] and IPs [192.168.94.2 127.0.0.1 ::1]
	I1101 09:21:39.077833  263568 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1101 09:21:39.077926  263568 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1101 09:21:39.078002  263568 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1101 09:21:39.078060  263568 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1101 09:21:39.078104  263568 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1101 09:21:39.078164  263568 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1101 09:21:39.078242  263568 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1101 09:21:39.078357  263568 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1101 09:21:39.078407  263568 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1101 09:21:39.078476  263568 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1101 09:21:39.078534  263568 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1101 09:21:39.079883  263568 out.go:252]   - Booting up control plane ...
	I1101 09:21:39.079979  263568 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1101 09:21:39.080072  263568 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1101 09:21:39.080158  263568 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1101 09:21:39.080248  263568 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1101 09:21:39.080321  263568 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1101 09:21:39.080410  263568 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1101 09:21:39.080487  263568 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1101 09:21:39.080520  263568 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1101 09:21:39.080621  263568 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1101 09:21:39.080708  263568 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1101 09:21:39.080762  263568 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 501.756426ms
	I1101 09:21:39.080849  263568 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1101 09:21:39.080954  263568 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.94.2:8443/livez
	I1101 09:21:39.081027  263568 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1101 09:21:39.081139  263568 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1101 09:21:39.081209  263568 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 1.622492073s
	I1101 09:21:39.081276  263568 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 2.035370604s
	I1101 09:21:39.081342  263568 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 4.001446602s
	I1101 09:21:39.081478  263568 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1101 09:21:39.081608  263568 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1101 09:21:39.081696  263568 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1101 09:21:39.081968  263568 kubeadm.go:319] [mark-control-plane] Marking the node newest-cni-340756 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1101 09:21:39.082027  263568 kubeadm.go:319] [bootstrap-token] Using token: tyli2s.xsn3fo4xtejuilp0
	I1101 09:21:39.083715  263568 out.go:252]   - Configuring RBAC rules ...
	I1101 09:21:39.083857  263568 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1101 09:21:39.084000  263568 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1101 09:21:39.084180  263568 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1101 09:21:39.084305  263568 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1101 09:21:39.084396  263568 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1101 09:21:39.084465  263568 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1101 09:21:39.084563  263568 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1101 09:21:39.084630  263568 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1101 09:21:39.084704  263568 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1101 09:21:39.084713  263568 kubeadm.go:319] 
	I1101 09:21:39.084803  263568 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1101 09:21:39.084812  263568 kubeadm.go:319] 
	I1101 09:21:39.084946  263568 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1101 09:21:39.084956  263568 kubeadm.go:319] 
	I1101 09:21:39.084996  263568 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1101 09:21:39.085106  263568 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1101 09:21:39.085202  263568 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1101 09:21:39.085211  263568 kubeadm.go:319] 
	I1101 09:21:39.085305  263568 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1101 09:21:39.085324  263568 kubeadm.go:319] 
	I1101 09:21:39.085398  263568 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1101 09:21:39.085416  263568 kubeadm.go:319] 
	I1101 09:21:39.085474  263568 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1101 09:21:39.085544  263568 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1101 09:21:39.085631  263568 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1101 09:21:39.085639  263568 kubeadm.go:319] 
	I1101 09:21:39.085752  263568 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1101 09:21:39.085858  263568 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1101 09:21:39.085878  263568 kubeadm.go:319] 
	I1101 09:21:39.085987  263568 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token tyli2s.xsn3fo4xtejuilp0 \
	I1101 09:21:39.086079  263568 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:77adef89518789e9abc276228bed68518bce4d031c33004dbbcc2105ab5e0752 \
	I1101 09:21:39.086098  263568 kubeadm.go:319] 	--control-plane 
	I1101 09:21:39.086103  263568 kubeadm.go:319] 
	I1101 09:21:39.086168  263568 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1101 09:21:39.086174  263568 kubeadm.go:319] 
	I1101 09:21:39.086241  263568 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token tyli2s.xsn3fo4xtejuilp0 \
	I1101 09:21:39.086362  263568 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:77adef89518789e9abc276228bed68518bce4d031c33004dbbcc2105ab5e0752 
	I1101 09:21:39.086377  263568 cni.go:84] Creating CNI manager for ""
	I1101 09:21:39.086383  263568 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1101 09:21:39.088895  263568 out.go:179] * Configuring CNI (Container Networking Interface) ...
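Both clusters finish kubeadm init by printing worker and control-plane join commands with a freshly minted bootstrap token. As an aside (nothing in this test run executes it), the same worker join command can be regenerated later on the control-plane node:

    # Print a current "kubeadm join ..." worker command, creating a new
    # bootstrap token and embedding the CA certificate hash.
    sudo kubeadm token create --print-join-command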
	W1101 09:21:35.571965  256247 pod_ready.go:104] pod "coredns-66bc5c9577-wwvth" is not "Ready", error: <nil>
	W1101 09:21:38.071647  256247 pod_ready.go:104] pod "coredns-66bc5c9577-wwvth" is not "Ready", error: <nil>
	I1101 09:21:35.933884  216020 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1101 09:21:35.934303  216020 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1101 09:21:35.934356  216020 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1101 09:21:35.934407  216020 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1101 09:21:35.962382  216020 cri.go:89] found id: "b88c1a6c22eb7f0b872bf19c02addfdad95c9f578a8590d19c57a767cb19559f"
	I1101 09:21:35.962409  216020 cri.go:89] found id: ""
	I1101 09:21:35.962432  216020 logs.go:282] 1 containers: [b88c1a6c22eb7f0b872bf19c02addfdad95c9f578a8590d19c57a767cb19559f]
	I1101 09:21:35.962493  216020 ssh_runner.go:195] Run: which crictl
	I1101 09:21:35.967486  216020 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1101 09:21:35.967566  216020 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1101 09:21:35.995175  216020 cri.go:89] found id: ""
	I1101 09:21:35.995205  216020 logs.go:282] 0 containers: []
	W1101 09:21:35.995215  216020 logs.go:284] No container was found matching "etcd"
	I1101 09:21:35.995223  216020 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1101 09:21:35.995277  216020 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1101 09:21:36.023392  216020 cri.go:89] found id: ""
	I1101 09:21:36.023425  216020 logs.go:282] 0 containers: []
	W1101 09:21:36.023435  216020 logs.go:284] No container was found matching "coredns"
	I1101 09:21:36.023442  216020 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1101 09:21:36.023495  216020 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1101 09:21:36.051769  216020 cri.go:89] found id: "7bc51deb4b19b20f402b361941a1b67db11ece9919ebd34259e751ec1f07abb8"
	I1101 09:21:36.051791  216020 cri.go:89] found id: ""
	I1101 09:21:36.051800  216020 logs.go:282] 1 containers: [7bc51deb4b19b20f402b361941a1b67db11ece9919ebd34259e751ec1f07abb8]
	I1101 09:21:36.051879  216020 ssh_runner.go:195] Run: which crictl
	I1101 09:21:36.055950  216020 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1101 09:21:36.056021  216020 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1101 09:21:36.084309  216020 cri.go:89] found id: ""
	I1101 09:21:36.084329  216020 logs.go:282] 0 containers: []
	W1101 09:21:36.084337  216020 logs.go:284] No container was found matching "kube-proxy"
	I1101 09:21:36.084343  216020 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1101 09:21:36.084394  216020 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1101 09:21:36.112936  216020 cri.go:89] found id: "b266a6bfcdcb5bd3d3e21a7e3af65c4839577648b3116ca9c382cea59b909867"
	I1101 09:21:36.112967  216020 cri.go:89] found id: ""
	I1101 09:21:36.112978  216020 logs.go:282] 1 containers: [b266a6bfcdcb5bd3d3e21a7e3af65c4839577648b3116ca9c382cea59b909867]
	I1101 09:21:36.113024  216020 ssh_runner.go:195] Run: which crictl
	I1101 09:21:36.117134  216020 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1101 09:21:36.117193  216020 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1101 09:21:36.144389  216020 cri.go:89] found id: ""
	I1101 09:21:36.144419  216020 logs.go:282] 0 containers: []
	W1101 09:21:36.144432  216020 logs.go:284] No container was found matching "kindnet"
	I1101 09:21:36.144447  216020 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1101 09:21:36.144507  216020 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1101 09:21:36.174333  216020 cri.go:89] found id: ""
	I1101 09:21:36.174355  216020 logs.go:282] 0 containers: []
	W1101 09:21:36.174363  216020 logs.go:284] No container was found matching "storage-provisioner"
	I1101 09:21:36.174372  216020 logs.go:123] Gathering logs for dmesg ...
	I1101 09:21:36.174383  216020 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1101 09:21:36.190991  216020 logs.go:123] Gathering logs for describe nodes ...
	I1101 09:21:36.191022  216020 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1101 09:21:36.260051  216020 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1101 09:21:36.260074  216020 logs.go:123] Gathering logs for kube-apiserver [b88c1a6c22eb7f0b872bf19c02addfdad95c9f578a8590d19c57a767cb19559f] ...
	I1101 09:21:36.260090  216020 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b88c1a6c22eb7f0b872bf19c02addfdad95c9f578a8590d19c57a767cb19559f"
	I1101 09:21:36.298788  216020 logs.go:123] Gathering logs for kube-scheduler [7bc51deb4b19b20f402b361941a1b67db11ece9919ebd34259e751ec1f07abb8] ...
	I1101 09:21:36.298821  216020 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7bc51deb4b19b20f402b361941a1b67db11ece9919ebd34259e751ec1f07abb8"
	I1101 09:21:36.362575  216020 logs.go:123] Gathering logs for kube-controller-manager [b266a6bfcdcb5bd3d3e21a7e3af65c4839577648b3116ca9c382cea59b909867] ...
	I1101 09:21:36.362621  216020 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b266a6bfcdcb5bd3d3e21a7e3af65c4839577648b3116ca9c382cea59b909867"
	I1101 09:21:36.399166  216020 logs.go:123] Gathering logs for CRI-O ...
	I1101 09:21:36.399193  216020 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1101 09:21:36.468028  216020 logs.go:123] Gathering logs for container status ...
	I1101 09:21:36.468069  216020 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1101 09:21:36.501911  216020 logs.go:123] Gathering logs for kubelet ...
	I1101 09:21:36.501939  216020 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1101 09:21:39.109967  216020 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1101 09:21:39.110414  216020 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1101 09:21:39.110469  216020 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1101 09:21:39.110522  216020 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1101 09:21:39.143264  216020 cri.go:89] found id: "b88c1a6c22eb7f0b872bf19c02addfdad95c9f578a8590d19c57a767cb19559f"
	I1101 09:21:39.143283  216020 cri.go:89] found id: ""
	I1101 09:21:39.143290  216020 logs.go:282] 1 containers: [b88c1a6c22eb7f0b872bf19c02addfdad95c9f578a8590d19c57a767cb19559f]
	I1101 09:21:39.143342  216020 ssh_runner.go:195] Run: which crictl
	I1101 09:21:39.147691  216020 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1101 09:21:39.147771  216020 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1101 09:21:39.178502  216020 cri.go:89] found id: ""
	I1101 09:21:39.178528  216020 logs.go:282] 0 containers: []
	W1101 09:21:39.178538  216020 logs.go:284] No container was found matching "etcd"
	I1101 09:21:39.178545  216020 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1101 09:21:39.178607  216020 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1101 09:21:39.211410  216020 cri.go:89] found id: ""
	I1101 09:21:39.211440  216020 logs.go:282] 0 containers: []
	W1101 09:21:39.211450  216020 logs.go:284] No container was found matching "coredns"
	I1101 09:21:39.211459  216020 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1101 09:21:39.211521  216020 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1101 09:21:39.246616  216020 cri.go:89] found id: "7bc51deb4b19b20f402b361941a1b67db11ece9919ebd34259e751ec1f07abb8"
	I1101 09:21:39.246645  216020 cri.go:89] found id: ""
	I1101 09:21:39.246655  216020 logs.go:282] 1 containers: [7bc51deb4b19b20f402b361941a1b67db11ece9919ebd34259e751ec1f07abb8]
	I1101 09:21:39.246724  216020 ssh_runner.go:195] Run: which crictl
	I1101 09:21:39.251059  216020 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1101 09:21:39.251127  216020 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1101 09:21:39.284617  216020 cri.go:89] found id: ""
	I1101 09:21:39.284652  216020 logs.go:282] 0 containers: []
	W1101 09:21:39.284664  216020 logs.go:284] No container was found matching "kube-proxy"
	I1101 09:21:39.284672  216020 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1101 09:21:39.284740  216020 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1101 09:21:39.319828  216020 cri.go:89] found id: "b266a6bfcdcb5bd3d3e21a7e3af65c4839577648b3116ca9c382cea59b909867"
	I1101 09:21:39.319852  216020 cri.go:89] found id: ""
	I1101 09:21:39.319892  216020 logs.go:282] 1 containers: [b266a6bfcdcb5bd3d3e21a7e3af65c4839577648b3116ca9c382cea59b909867]
	I1101 09:21:39.319956  216020 ssh_runner.go:195] Run: which crictl
	I1101 09:21:39.325472  216020 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1101 09:21:39.325548  216020 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1101 09:21:39.368025  216020 cri.go:89] found id: ""
	I1101 09:21:39.368054  216020 logs.go:282] 0 containers: []
	W1101 09:21:39.368065  216020 logs.go:284] No container was found matching "kindnet"
	I1101 09:21:39.368074  216020 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1101 09:21:39.368133  216020 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1101 09:21:39.405733  216020 cri.go:89] found id: ""
	I1101 09:21:39.405771  216020 logs.go:282] 0 containers: []
	W1101 09:21:39.405783  216020 logs.go:284] No container was found matching "storage-provisioner"
	I1101 09:21:39.405793  216020 logs.go:123] Gathering logs for kube-apiserver [b88c1a6c22eb7f0b872bf19c02addfdad95c9f578a8590d19c57a767cb19559f] ...
	I1101 09:21:39.405814  216020 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b88c1a6c22eb7f0b872bf19c02addfdad95c9f578a8590d19c57a767cb19559f"
	I1101 09:21:39.465634  216020 logs.go:123] Gathering logs for kube-scheduler [7bc51deb4b19b20f402b361941a1b67db11ece9919ebd34259e751ec1f07abb8] ...
	I1101 09:21:39.465668  216020 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7bc51deb4b19b20f402b361941a1b67db11ece9919ebd34259e751ec1f07abb8"
	I1101 09:21:39.525559  216020 logs.go:123] Gathering logs for kube-controller-manager [b266a6bfcdcb5bd3d3e21a7e3af65c4839577648b3116ca9c382cea59b909867] ...
	I1101 09:21:39.525593  216020 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b266a6bfcdcb5bd3d3e21a7e3af65c4839577648b3116ca9c382cea59b909867"
	I1101 09:21:39.558313  216020 logs.go:123] Gathering logs for CRI-O ...
	I1101 09:21:39.558350  216020 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1101 09:21:39.625965  216020 logs.go:123] Gathering logs for container status ...
	I1101 09:21:39.626006  216020 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1101 09:21:39.658059  216020 logs.go:123] Gathering logs for kubelet ...
	I1101 09:21:39.658092  216020 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1101 09:21:39.759174  216020 logs.go:123] Gathering logs for dmesg ...
	I1101 09:21:39.759215  216020 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1101 09:21:39.777214  216020 logs.go:123] Gathering logs for describe nodes ...
	I1101 09:21:39.777243  216020 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
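Process 216020 keeps cycling through the same diagnostics while https://192.168.85.2:8443 refuses connections: kubelet and CRI-O journals, dmesg, container status, and a describe-nodes attempt that fails for the same reason. Roughly the same data can be pulled by hand on the node; a sketch built only from commands already visible in this log:

    # Manual equivalent of the diagnostics loop above.
    sudo journalctl -u kubelet -n 400                           # kubelet logs
    sudo journalctl -u crio -n 400                              # CRI-O logs
    sudo crictl ps -a                                           # containers in any state
    sudo dmesg --level warn,err,crit,alert,emerg | tail -n 400  # recent kernel warnings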
	I1101 09:21:37.191433  262357 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1101 09:21:37.197163  262357 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1101 09:21:37.197191  262357 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1101 09:21:37.214677  262357 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1101 09:21:37.444545  262357 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 09:21:37.444657  262357 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1101 09:21:37.445053  262357 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-648641 minikube.k8s.io/updated_at=2025_11_01T09_21_37_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=21e20c7776311c6e29254646bf2620ea610dd192 minikube.k8s.io/name=default-k8s-diff-port-648641 minikube.k8s.io/primary=true
	I1101 09:21:37.529149  262357 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 09:21:37.529164  262357 ops.go:34] apiserver oom_adj: -16
	I1101 09:21:38.029350  262357 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 09:21:38.529475  262357 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 09:21:39.029339  262357 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 09:21:39.530028  262357 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 09:21:40.029249  262357 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 09:21:40.530053  262357 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 09:21:41.029582  262357 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 09:21:39.091032  263568 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1101 09:21:39.096443  263568 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1101 09:21:39.096465  263568 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1101 09:21:39.111385  263568 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1101 09:21:39.374555  263568 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1101 09:21:39.374668  263568 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 09:21:39.374704  263568 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes newest-cni-340756 minikube.k8s.io/updated_at=2025_11_01T09_21_39_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=21e20c7776311c6e29254646bf2620ea610dd192 minikube.k8s.io/name=newest-cni-340756 minikube.k8s.io/primary=true
	I1101 09:21:39.494159  263568 ops.go:34] apiserver oom_adj: -16
	I1101 09:21:39.494196  263568 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 09:21:39.994658  263568 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 09:21:40.494262  263568 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 09:21:40.994330  263568 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 09:21:41.530066  262357 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 09:21:41.604074  262357 kubeadm.go:1114] duration metric: took 4.159600393s to wait for elevateKubeSystemPrivileges
	I1101 09:21:41.604116  262357 kubeadm.go:403] duration metric: took 15.575329154s to StartCluster
	I1101 09:21:41.604136  262357 settings.go:142] acquiring lock: {Name:mkb1ba7d0d4bb15f3f0746ce486d72703f901580 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:21:41.604215  262357 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21835-5913/kubeconfig
	I1101 09:21:41.605814  262357 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21835-5913/kubeconfig: {Name:mk9d3795c1875d6d9ba81c81d9d8436a6b7942d2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:21:41.606175  262357 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1101 09:21:41.606192  262357 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1101 09:21:41.606275  262357 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1101 09:21:41.606366  262357 config.go:182] Loaded profile config "default-k8s-diff-port-648641": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:21:41.606402  262357 addons.go:70] Setting storage-provisioner=true in profile "default-k8s-diff-port-648641"
	I1101 09:21:41.606422  262357 addons.go:70] Setting default-storageclass=true in profile "default-k8s-diff-port-648641"
	I1101 09:21:41.606459  262357 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-648641"
	I1101 09:21:41.606432  262357 addons.go:239] Setting addon storage-provisioner=true in "default-k8s-diff-port-648641"
	I1101 09:21:41.606586  262357 host.go:66] Checking if "default-k8s-diff-port-648641" exists ...
	I1101 09:21:41.606969  262357 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-648641 --format={{.State.Status}}
	I1101 09:21:41.607153  262357 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-648641 --format={{.State.Status}}
	I1101 09:21:41.607935  262357 out.go:179] * Verifying Kubernetes components...
	I1101 09:21:41.609213  262357 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 09:21:41.636030  262357 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1101 09:21:41.636379  262357 addons.go:239] Setting addon default-storageclass=true in "default-k8s-diff-port-648641"
	I1101 09:21:41.636432  262357 host.go:66] Checking if "default-k8s-diff-port-648641" exists ...
	I1101 09:21:41.636825  262357 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-648641 --format={{.State.Status}}
	I1101 09:21:41.637464  262357 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1101 09:21:41.637489  262357 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1101 09:21:41.637568  262357 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-648641
	I1101 09:21:41.674180  262357 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33083 SSHKeyPath:/home/jenkins/minikube-integration/21835-5913/.minikube/machines/default-k8s-diff-port-648641/id_rsa Username:docker}
	I1101 09:21:41.679112  262357 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1101 09:21:41.679137  262357 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1101 09:21:41.679211  262357 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-648641
	I1101 09:21:41.703106  262357 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33083 SSHKeyPath:/home/jenkins/minikube-integration/21835-5913/.minikube/machines/default-k8s-diff-port-648641/id_rsa Username:docker}
	I1101 09:21:41.742430  262357 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.103.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1101 09:21:41.780409  262357 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1101 09:21:41.810267  262357 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1101 09:21:41.844165  262357 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1101 09:21:41.949468  262357 start.go:977] {"host.minikube.internal": 192.168.103.1} host record injected into CoreDNS's ConfigMap
	I1101 09:21:41.950803  262357 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-648641" to be "Ready" ...
	I1101 09:21:42.179770  262357 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
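The CoreDNS ConfigMap is patched in place (the sed pipeline a few lines up) so that host.minikube.internal resolves to 192.168.103.1 inside the cluster, after which the two default addons are applied. A hedged way to confirm the injected record afterwards, assuming a working kubeconfig for this profile:

    # Show the CoreDNS Corefile and look for the injected hosts block.
    kubectl -n kube-system get configmap coredns -o yaml | grep -A 3 'hosts {'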
	W1101 09:21:40.572956  256247 pod_ready.go:104] pod "coredns-66bc5c9577-wwvth" is not "Ready", error: <nil>
	I1101 09:21:42.069428  256247 pod_ready.go:94] pod "coredns-66bc5c9577-wwvth" is "Ready"
	I1101 09:21:42.069460  256247 pod_ready.go:86] duration metric: took 31.506833683s for pod "coredns-66bc5c9577-wwvth" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:21:42.073169  256247 pod_ready.go:83] waiting for pod "etcd-embed-certs-236314" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:21:42.079254  256247 pod_ready.go:94] pod "etcd-embed-certs-236314" is "Ready"
	I1101 09:21:42.079286  256247 pod_ready.go:86] duration metric: took 6.089876ms for pod "etcd-embed-certs-236314" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:21:42.082501  256247 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-236314" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:21:42.087688  256247 pod_ready.go:94] pod "kube-apiserver-embed-certs-236314" is "Ready"
	I1101 09:21:42.087717  256247 pod_ready.go:86] duration metric: took 5.189118ms for pod "kube-apiserver-embed-certs-236314" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:21:42.091305  256247 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-236314" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:21:42.266702  256247 pod_ready.go:94] pod "kube-controller-manager-embed-certs-236314" is "Ready"
	I1101 09:21:42.266737  256247 pod_ready.go:86] duration metric: took 175.350883ms for pod "kube-controller-manager-embed-certs-236314" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:21:42.467790  256247 pod_ready.go:83] waiting for pod "kube-proxy-55ft8" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:21:42.866637  256247 pod_ready.go:94] pod "kube-proxy-55ft8" is "Ready"
	I1101 09:21:42.866676  256247 pod_ready.go:86] duration metric: took 398.843584ms for pod "kube-proxy-55ft8" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:21:43.067021  256247 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-236314" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:21:43.466749  256247 pod_ready.go:94] pod "kube-scheduler-embed-certs-236314" is "Ready"
	I1101 09:21:43.466783  256247 pod_ready.go:86] duration metric: took 399.735748ms for pod "kube-scheduler-embed-certs-236314" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:21:43.466804  256247 pod_ready.go:40] duration metric: took 32.908124185s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1101 09:21:43.514845  256247 start.go:628] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1101 09:21:43.516754  256247 out.go:179] * Done! kubectl is now configured to use "embed-certs-236314" cluster and "default" namespace by default
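	[editor's note] The 256247 start-up thread above (pod_ready.go) waits for each control-plane pod in kube-system, selected by label, to report a Ready condition before declaring the cluster usable. The following is only a minimal illustrative sketch of that style of readiness wait written against client-go; the kubeconfig path, label list, and timeouts are placeholders, not minikube's actual implementation.

	```go
	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// podReady reports whether the pod's Ready condition is True.
	func podReady(pod *corev1.Pod) bool {
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	// waitForLabel polls kube-system pods matching the selector until all of
	// them are Ready or the timeout expires.
	func waitForLabel(cs *kubernetes.Clientset, selector string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(),
				metav1.ListOptions{LabelSelector: selector})
			if err == nil && len(pods.Items) > 0 {
				ready := true
				for i := range pods.Items {
					if !podReady(&pods.Items[i]) {
						ready = false
						break
					}
				}
				if ready {
					return nil
				}
			}
			time.Sleep(2 * time.Second)
		}
		return fmt.Errorf("pods with label %q not Ready after %s", selector, timeout)
	}

	func main() {
		// Hypothetical kubeconfig path for illustration only.
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		for _, sel := range []string{"k8s-app=kube-dns", "component=etcd", "component=kube-apiserver"} {
			if err := waitForLabel(cs, sel, 6*time.Minute); err != nil {
				fmt.Println(err)
				return
			}
		}
		fmt.Println("control-plane pods Ready")
	}
	```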
	I1101 09:21:41.495296  263568 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 09:21:41.994701  263568 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 09:21:42.495064  263568 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 09:21:42.994567  263568 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 09:21:43.495241  263568 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 09:21:43.575951  263568 kubeadm.go:1114] duration metric: took 4.2013384s to wait for elevateKubeSystemPrivileges
	I1101 09:21:43.575987  263568 kubeadm.go:403] duration metric: took 15.179967762s to StartCluster
	I1101 09:21:43.576007  263568 settings.go:142] acquiring lock: {Name:mkb1ba7d0d4bb15f3f0746ce486d72703f901580 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:21:43.576093  263568 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21835-5913/kubeconfig
	I1101 09:21:43.578613  263568 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21835-5913/kubeconfig: {Name:mk9d3795c1875d6d9ba81c81d9d8436a6b7942d2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:21:43.578959  263568 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1101 09:21:43.578971  263568 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1101 09:21:43.579034  263568 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1101 09:21:43.579211  263568 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-340756"
	I1101 09:21:43.579236  263568 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-340756"
	I1101 09:21:43.579254  263568 config.go:182] Loaded profile config "newest-cni-340756": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:21:43.579274  263568 host.go:66] Checking if "newest-cni-340756" exists ...
	I1101 09:21:43.579267  263568 addons.go:70] Setting default-storageclass=true in profile "newest-cni-340756"
	I1101 09:21:43.579381  263568 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-340756"
	I1101 09:21:43.579722  263568 cli_runner.go:164] Run: docker container inspect newest-cni-340756 --format={{.State.Status}}
	I1101 09:21:43.579808  263568 cli_runner.go:164] Run: docker container inspect newest-cni-340756 --format={{.State.Status}}
	I1101 09:21:43.581304  263568 out.go:179] * Verifying Kubernetes components...
	I1101 09:21:43.583140  263568 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 09:21:43.610121  263568 addons.go:239] Setting addon default-storageclass=true in "newest-cni-340756"
	I1101 09:21:43.610168  263568 host.go:66] Checking if "newest-cni-340756" exists ...
	I1101 09:21:43.610646  263568 cli_runner.go:164] Run: docker container inspect newest-cni-340756 --format={{.State.Status}}
	I1101 09:21:43.611306  263568 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1101 09:21:43.613411  263568 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1101 09:21:43.613437  263568 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1101 09:21:43.613499  263568 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-340756
	I1101 09:21:43.657272  263568 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1101 09:21:43.657341  263568 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1101 09:21:43.657427  263568 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-340756
	I1101 09:21:43.659518  263568 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/21835-5913/.minikube/machines/newest-cni-340756/id_rsa Username:docker}
	I1101 09:21:43.684025  263568 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/21835-5913/.minikube/machines/newest-cni-340756/id_rsa Username:docker}
	I1101 09:21:43.699595  263568 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.94.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1101 09:21:43.758185  263568 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1101 09:21:43.784453  263568 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1101 09:21:43.805177  263568 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1101 09:21:43.909273  263568 start.go:977] {"host.minikube.internal": 192.168.94.1} host record injected into CoreDNS's ConfigMap
	I1101 09:21:43.912845  263568 api_server.go:52] waiting for apiserver process to appear ...
	I1101 09:21:43.913009  263568 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 09:21:44.129441  263568 api_server.go:72] duration metric: took 550.43073ms to wait for apiserver process to appear ...
	I1101 09:21:44.129468  263568 api_server.go:88] waiting for apiserver healthz status ...
	I1101 09:21:44.129489  263568 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1101 09:21:44.134537  263568 api_server.go:279] https://192.168.94.2:8443/healthz returned 200:
	ok
	I1101 09:21:44.135389  263568 api_server.go:141] control plane version: v1.34.1
	I1101 09:21:44.135412  263568 api_server.go:131] duration metric: took 5.938114ms to wait for apiserver health ...
	I1101 09:21:44.135421  263568 system_pods.go:43] waiting for kube-system pods to appear ...
	I1101 09:21:44.136055  263568 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1101 09:21:44.136942  263568 addons.go:515] duration metric: took 557.903817ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1101 09:21:44.138990  263568 system_pods.go:59] 8 kube-system pods found
	I1101 09:21:44.139016  263568 system_pods.go:61] "coredns-66bc5c9577-tmnp2" [3dc7a625-aa33-404e-b8e1-4abff976bac9] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1101 09:21:44.139023  263568 system_pods.go:61] "etcd-newest-cni-340756" [5ba122dc-81df-44c9-b993-82d2381dd60c] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1101 09:21:44.139032  263568 system_pods.go:61] "kindnet-gjnst" [9c4e4a33-eff1-47ec-94bc-7f9196c547ff] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1101 09:21:44.139038  263568 system_pods.go:61] "kube-apiserver-newest-cni-340756" [fefc943a-a3b3-4069-9eed-d6a6815d3846] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1101 09:21:44.139046  263568 system_pods.go:61] "kube-controller-manager-newest-cni-340756" [f6823fe4-7c7e-4b04-8fbd-f52058100d5b] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1101 09:21:44.139051  263568 system_pods.go:61] "kube-proxy-wp2h9" [e6a908ac-4dfb-4f1c-8059-79695562a817] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1101 09:21:44.139056  263568 system_pods.go:61] "kube-scheduler-newest-cni-340756" [4673d267-6290-4f99-af1c-173b383aa4ea] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1101 09:21:44.139061  263568 system_pods.go:61] "storage-provisioner" [0e7d7956-489a-4005-ba49-4975f35bfc8a] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1101 09:21:44.139068  263568 system_pods.go:74] duration metric: took 3.64168ms to wait for pod list to return data ...
	I1101 09:21:44.139078  263568 default_sa.go:34] waiting for default service account to be created ...
	I1101 09:21:44.141225  263568 default_sa.go:45] found service account: "default"
	I1101 09:21:44.141244  263568 default_sa.go:55] duration metric: took 2.160133ms for default service account to be created ...
	I1101 09:21:44.141255  263568 kubeadm.go:587] duration metric: took 562.252373ms to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1101 09:21:44.141271  263568 node_conditions.go:102] verifying NodePressure condition ...
	I1101 09:21:44.143574  263568 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1101 09:21:44.143610  263568 node_conditions.go:123] node cpu capacity is 8
	I1101 09:21:44.143627  263568 node_conditions.go:105] duration metric: took 2.351387ms to run NodePressure ...
	I1101 09:21:44.143646  263568 start.go:242] waiting for startup goroutines ...
	I1101 09:21:44.414584  263568 kapi.go:214] "coredns" deployment in "kube-system" namespace and "newest-cni-340756" context rescaled to 1 replicas
	I1101 09:21:44.414622  263568 start.go:247] waiting for cluster config update ...
	I1101 09:21:44.414636  263568 start.go:256] writing updated cluster config ...
	I1101 09:21:44.415030  263568 ssh_runner.go:195] Run: rm -f paused
	I1101 09:21:44.473212  263568 start.go:628] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1101 09:21:44.475648  263568 out.go:179] * Done! kubectl is now configured to use "newest-cni-340756" cluster and "default" namespace by default
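	[editor's note] The 263568 thread above (api_server.go) polls https://192.168.94.2:8443/healthz until it returns 200 before checking the control-plane version and system pods. A minimal sketch of that kind of readiness probe is shown below; the URL, timeouts, and skipped TLS verification are assumptions for illustration, not minikube's code.

	```go
	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	// pollHealthz GETs the apiserver /healthz endpoint until it returns
	// HTTP 200 or the deadline expires.
	func pollHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			Timeout: 2 * time.Second,
			Transport: &http.Transport{
				// The apiserver in this setup serves a self-signed certificate;
				// a real client would load the cluster CA instead of skipping checks.
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil
				}
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("apiserver %s not healthy after %s", url, timeout)
	}

	func main() {
		if err := pollHealthz("https://192.168.94.2:8443/healthz", time.Minute); err != nil {
			fmt.Println(err)
			return
		}
		fmt.Println("apiserver healthz returned 200: ok")
	}
	```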
	W1101 09:21:39.840571  216020 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1101 09:21:42.340952  216020 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1101 09:21:42.341425  216020 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1101 09:21:42.341493  216020 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1101 09:21:42.341555  216020 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1101 09:21:42.373344  216020 cri.go:89] found id: "b88c1a6c22eb7f0b872bf19c02addfdad95c9f578a8590d19c57a767cb19559f"
	I1101 09:21:42.373374  216020 cri.go:89] found id: ""
	I1101 09:21:42.373384  216020 logs.go:282] 1 containers: [b88c1a6c22eb7f0b872bf19c02addfdad95c9f578a8590d19c57a767cb19559f]
	I1101 09:21:42.373448  216020 ssh_runner.go:195] Run: which crictl
	I1101 09:21:42.377993  216020 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1101 09:21:42.378055  216020 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1101 09:21:42.407269  216020 cri.go:89] found id: ""
	I1101 09:21:42.407298  216020 logs.go:282] 0 containers: []
	W1101 09:21:42.407310  216020 logs.go:284] No container was found matching "etcd"
	I1101 09:21:42.407318  216020 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1101 09:21:42.407378  216020 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1101 09:21:42.437089  216020 cri.go:89] found id: ""
	I1101 09:21:42.437118  216020 logs.go:282] 0 containers: []
	W1101 09:21:42.437129  216020 logs.go:284] No container was found matching "coredns"
	I1101 09:21:42.437138  216020 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1101 09:21:42.437191  216020 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1101 09:21:42.471604  216020 cri.go:89] found id: "7bc51deb4b19b20f402b361941a1b67db11ece9919ebd34259e751ec1f07abb8"
	I1101 09:21:42.471635  216020 cri.go:89] found id: ""
	I1101 09:21:42.471644  216020 logs.go:282] 1 containers: [7bc51deb4b19b20f402b361941a1b67db11ece9919ebd34259e751ec1f07abb8]
	I1101 09:21:42.471759  216020 ssh_runner.go:195] Run: which crictl
	I1101 09:21:42.477331  216020 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1101 09:21:42.477420  216020 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1101 09:21:42.511388  216020 cri.go:89] found id: ""
	I1101 09:21:42.511416  216020 logs.go:282] 0 containers: []
	W1101 09:21:42.511427  216020 logs.go:284] No container was found matching "kube-proxy"
	I1101 09:21:42.511442  216020 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1101 09:21:42.511500  216020 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1101 09:21:42.544140  216020 cri.go:89] found id: "b266a6bfcdcb5bd3d3e21a7e3af65c4839577648b3116ca9c382cea59b909867"
	I1101 09:21:42.544166  216020 cri.go:89] found id: ""
	I1101 09:21:42.544176  216020 logs.go:282] 1 containers: [b266a6bfcdcb5bd3d3e21a7e3af65c4839577648b3116ca9c382cea59b909867]
	I1101 09:21:42.544242  216020 ssh_runner.go:195] Run: which crictl
	I1101 09:21:42.549430  216020 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1101 09:21:42.549508  216020 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1101 09:21:42.584483  216020 cri.go:89] found id: ""
	I1101 09:21:42.584511  216020 logs.go:282] 0 containers: []
	W1101 09:21:42.584521  216020 logs.go:284] No container was found matching "kindnet"
	I1101 09:21:42.584529  216020 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1101 09:21:42.584583  216020 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1101 09:21:42.624167  216020 cri.go:89] found id: ""
	I1101 09:21:42.624193  216020 logs.go:282] 0 containers: []
	W1101 09:21:42.624203  216020 logs.go:284] No container was found matching "storage-provisioner"
	I1101 09:21:42.624222  216020 logs.go:123] Gathering logs for dmesg ...
	I1101 09:21:42.624239  216020 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1101 09:21:42.642149  216020 logs.go:123] Gathering logs for describe nodes ...
	I1101 09:21:42.642183  216020 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1101 09:21:42.709435  216020 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1101 09:21:42.709460  216020 logs.go:123] Gathering logs for kube-apiserver [b88c1a6c22eb7f0b872bf19c02addfdad95c9f578a8590d19c57a767cb19559f] ...
	I1101 09:21:42.709478  216020 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b88c1a6c22eb7f0b872bf19c02addfdad95c9f578a8590d19c57a767cb19559f"
	I1101 09:21:42.744665  216020 logs.go:123] Gathering logs for kube-scheduler [7bc51deb4b19b20f402b361941a1b67db11ece9919ebd34259e751ec1f07abb8] ...
	I1101 09:21:42.744704  216020 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7bc51deb4b19b20f402b361941a1b67db11ece9919ebd34259e751ec1f07abb8"
	I1101 09:21:42.799594  216020 logs.go:123] Gathering logs for kube-controller-manager [b266a6bfcdcb5bd3d3e21a7e3af65c4839577648b3116ca9c382cea59b909867] ...
	I1101 09:21:42.799630  216020 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b266a6bfcdcb5bd3d3e21a7e3af65c4839577648b3116ca9c382cea59b909867"
	I1101 09:21:42.829152  216020 logs.go:123] Gathering logs for CRI-O ...
	I1101 09:21:42.829179  216020 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1101 09:21:42.892517  216020 logs.go:123] Gathering logs for container status ...
	I1101 09:21:42.892550  216020 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1101 09:21:42.926545  216020 logs.go:123] Gathering logs for kubelet ...
	I1101 09:21:42.926570  216020 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	
	
	==> CRI-O <==
	Nov 01 09:21:44 newest-cni-340756 crio[777]: time="2025-11-01T09:21:44.165257313Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Nov 01 09:21:44 newest-cni-340756 crio[777]: time="2025-11-01T09:21:44.166012229Z" level=info msg="Ran pod sandbox 6ab50c4bb633c92e534b8b564fcdc240a1623eb8d331489e63c8bce441b60f7c with infra container: kube-system/kube-proxy-wp2h9/POD" id=6bd15d18-5419-4968-b65f-5084c2240421 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 01 09:21:44 newest-cni-340756 crio[777]: time="2025-11-01T09:21:44.167478385Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=21e9ed64-b6ba-4b89-b064-304f6a35fa12 name=/runtime.v1.ImageService/ImageStatus
	Nov 01 09:21:44 newest-cni-340756 crio[777]: time="2025-11-01T09:21:44.16845612Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=d3e364c7-846c-41bf-8ebe-6dee24745a78 name=/runtime.v1.ImageService/ImageStatus
	Nov 01 09:21:44 newest-cni-340756 crio[777]: time="2025-11-01T09:21:44.171631105Z" level=info msg="Running pod sandbox: kube-system/kindnet-gjnst/POD" id=17bd24b4-b97a-478e-b1f7-af27dfd1bf34 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 01 09:21:44 newest-cni-340756 crio[777]: time="2025-11-01T09:21:44.171723001Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 09:21:44 newest-cni-340756 crio[777]: time="2025-11-01T09:21:44.172517232Z" level=info msg="Creating container: kube-system/kube-proxy-wp2h9/kube-proxy" id=a837d34f-80af-4bc1-9311-71e550417c78 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 09:21:44 newest-cni-340756 crio[777]: time="2025-11-01T09:21:44.172643286Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 09:21:44 newest-cni-340756 crio[777]: time="2025-11-01T09:21:44.17421654Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=17bd24b4-b97a-478e-b1f7-af27dfd1bf34 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 01 09:21:44 newest-cni-340756 crio[777]: time="2025-11-01T09:21:44.176300006Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Nov 01 09:21:44 newest-cni-340756 crio[777]: time="2025-11-01T09:21:44.177235673Z" level=info msg="Ran pod sandbox 7167d752d7cfdd6ac4e10d5d8e007e21e4916eb3f41fe16656eecaea73f7ef17 with infra container: kube-system/kindnet-gjnst/POD" id=17bd24b4-b97a-478e-b1f7-af27dfd1bf34 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 01 09:21:44 newest-cni-340756 crio[777]: time="2025-11-01T09:21:44.177378129Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 09:21:44 newest-cni-340756 crio[777]: time="2025-11-01T09:21:44.178022217Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 09:21:44 newest-cni-340756 crio[777]: time="2025-11-01T09:21:44.178273018Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=8a2a5b35-ced6-48e4-8fa0-2da06fb04853 name=/runtime.v1.ImageService/ImageStatus
	Nov 01 09:21:44 newest-cni-340756 crio[777]: time="2025-11-01T09:21:44.179223944Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=3a03586c-6496-4685-9144-d41f64bd74ba name=/runtime.v1.ImageService/ImageStatus
	Nov 01 09:21:44 newest-cni-340756 crio[777]: time="2025-11-01T09:21:44.183439531Z" level=info msg="Creating container: kube-system/kindnet-gjnst/kindnet-cni" id=a7676090-b8e2-455f-a9c1-c8a14a783965 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 09:21:44 newest-cni-340756 crio[777]: time="2025-11-01T09:21:44.183549882Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 09:21:44 newest-cni-340756 crio[777]: time="2025-11-01T09:21:44.187543433Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 09:21:44 newest-cni-340756 crio[777]: time="2025-11-01T09:21:44.188099366Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 09:21:44 newest-cni-340756 crio[777]: time="2025-11-01T09:21:44.210368983Z" level=info msg="Created container faec9a1b6bc70e57d63655ba0b358ca80c6c7a3452235369e73c522520c3824b: kube-system/kindnet-gjnst/kindnet-cni" id=a7676090-b8e2-455f-a9c1-c8a14a783965 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 09:21:44 newest-cni-340756 crio[777]: time="2025-11-01T09:21:44.211166434Z" level=info msg="Starting container: faec9a1b6bc70e57d63655ba0b358ca80c6c7a3452235369e73c522520c3824b" id=abe475d1-1fc7-457c-810b-d90ad0ff56bc name=/runtime.v1.RuntimeService/StartContainer
	Nov 01 09:21:44 newest-cni-340756 crio[777]: time="2025-11-01T09:21:44.211743834Z" level=info msg="Created container ed1acbdf5a853043c16fa3c710ec64c00afbe8c1f11b261f337fe74a67ff0b97: kube-system/kube-proxy-wp2h9/kube-proxy" id=a837d34f-80af-4bc1-9311-71e550417c78 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 09:21:44 newest-cni-340756 crio[777]: time="2025-11-01T09:21:44.212229355Z" level=info msg="Starting container: ed1acbdf5a853043c16fa3c710ec64c00afbe8c1f11b261f337fe74a67ff0b97" id=723bdf10-e9a4-4f69-9589-b3c17c85551a name=/runtime.v1.RuntimeService/StartContainer
	Nov 01 09:21:44 newest-cni-340756 crio[777]: time="2025-11-01T09:21:44.213462815Z" level=info msg="Started container" PID=1603 containerID=faec9a1b6bc70e57d63655ba0b358ca80c6c7a3452235369e73c522520c3824b description=kube-system/kindnet-gjnst/kindnet-cni id=abe475d1-1fc7-457c-810b-d90ad0ff56bc name=/runtime.v1.RuntimeService/StartContainer sandboxID=7167d752d7cfdd6ac4e10d5d8e007e21e4916eb3f41fe16656eecaea73f7ef17
	Nov 01 09:21:44 newest-cni-340756 crio[777]: time="2025-11-01T09:21:44.215161076Z" level=info msg="Started container" PID=1602 containerID=ed1acbdf5a853043c16fa3c710ec64c00afbe8c1f11b261f337fe74a67ff0b97 description=kube-system/kube-proxy-wp2h9/kube-proxy id=723bdf10-e9a4-4f69-9589-b3c17c85551a name=/runtime.v1.RuntimeService/StartContainer sandboxID=6ab50c4bb633c92e534b8b564fcdc240a1623eb8d331489e63c8bce441b60f7c
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	faec9a1b6bc70       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c   1 second ago        Running             kindnet-cni               0                   7167d752d7cfd       kindnet-gjnst                               kube-system
	ed1acbdf5a853       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7   1 second ago        Running             kube-proxy                0                   6ab50c4bb633c       kube-proxy-wp2h9                            kube-system
	935ac79bf2b1f       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97   11 seconds ago      Running             kube-apiserver            0                   a79e14f4112db       kube-apiserver-newest-cni-340756            kube-system
	2a45db34a9fe8       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115   11 seconds ago      Running             etcd                      0                   5951b6541772c       etcd-newest-cni-340756                      kube-system
	2d32f2c01f9a4       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f   11 seconds ago      Running             kube-controller-manager   0                   cdd1d2a3fbd59       kube-controller-manager-newest-cni-340756   kube-system
	5aaf08865bfb1       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813   11 seconds ago      Running             kube-scheduler            0                   66bedd599ecd9       kube-scheduler-newest-cni-340756            kube-system
	
	
	==> describe nodes <==
	Name:               newest-cni-340756
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=newest-cni-340756
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=21e20c7776311c6e29254646bf2620ea610dd192
	                    minikube.k8s.io/name=newest-cni-340756
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_01T09_21_39_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 01 Nov 2025 09:21:35 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-340756
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 01 Nov 2025 09:21:38 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 01 Nov 2025 09:21:38 +0000   Sat, 01 Nov 2025 09:21:34 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 01 Nov 2025 09:21:38 +0000   Sat, 01 Nov 2025 09:21:34 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 01 Nov 2025 09:21:38 +0000   Sat, 01 Nov 2025 09:21:34 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Sat, 01 Nov 2025 09:21:38 +0000   Sat, 01 Nov 2025 09:21:34 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Addresses:
	  InternalIP:  192.168.94.2
	  Hostname:    newest-cni-340756
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863348Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863348Ki
	  pods:               110
	System Info:
	  Machine ID:                 98aac72b9abe9f06f1b9b38568f5cc96
	  System UUID:                036af85e-ee16-42ad-9d1f-24aa651c4f5c
	  Boot ID:                    87f40279-f2b1-40f3-beea-00aea5942dd1
	  Kernel Version:             6.8.0-1043-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.42.0.0/24
	PodCIDRs:                     10.42.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-newest-cni-340756                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         7s
	  kube-system                 kindnet-gjnst                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      2s
	  kube-system                 kube-apiserver-newest-cni-340756             250m (3%)     0 (0%)      0 (0%)           0 (0%)         7s
	  kube-system                 kube-controller-manager-newest-cni-340756    200m (2%)     0 (0%)      0 (0%)           0 (0%)         7s
	  kube-system                 kube-proxy-wp2h9                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         2s
	  kube-system                 kube-scheduler-newest-cni-340756             100m (1%)     0 (0%)      0 (0%)           0 (0%)         7s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 1s                 kube-proxy       
	  Normal  Starting                 12s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  12s (x8 over 12s)  kubelet          Node newest-cni-340756 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    12s (x8 over 12s)  kubelet          Node newest-cni-340756 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     12s (x8 over 12s)  kubelet          Node newest-cni-340756 status is now: NodeHasSufficientPID
	  Normal  Starting                 7s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  7s                 kubelet          Node newest-cni-340756 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7s                 kubelet          Node newest-cni-340756 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7s                 kubelet          Node newest-cni-340756 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           3s                 node-controller  Node newest-cni-340756 event: Registered Node newest-cni-340756 in Controller
	
	
	==> dmesg <==
	[  +0.000025] ll header: 00000000: 0a 94 8e 4a 1a 1e 72 e7 eb 50 9b c7 08 00
	[  +2.047726] IPv4: martian source 10.105.176.244 from 192.168.49.2, on dev eth0
	[  +0.000008] ll header: 00000000: 0a 94 8e 4a 1a 1e 72 e7 eb 50 9b c7 08 00
	[Nov 1 08:38] IPv4: martian source 10.105.176.244 from 192.168.49.2, on dev eth0
	[  +0.000025] ll header: 00000000: 0a 94 8e 4a 1a 1e 72 e7 eb 50 9b c7 08 00
	[  +2.215674] IPv4: martian source 10.105.176.244 from 192.168.49.2, on dev eth0
	[  +0.000007] ll header: 00000000: 0a 94 8e 4a 1a 1e 72 e7 eb 50 9b c7 08 00
	[  +1.047939] IPv4: martian source 10.244.0.3 from 192.168.49.2, on dev eth0
	[  +0.000023] ll header: 00000000: 0a 94 8e 4a 1a 1e 72 e7 eb 50 9b c7 08 00
	[  +1.023915] IPv4: martian source 10.244.0.3 from 192.168.49.2, on dev eth0
	[  +0.000007] ll header: 00000000: 0a 94 8e 4a 1a 1e 72 e7 eb 50 9b c7 08 00
	[  +1.023851] IPv4: martian source 10.244.0.3 from 192.168.49.2, on dev eth0
	[  +0.000021] ll header: 00000000: 0a 94 8e 4a 1a 1e 72 e7 eb 50 9b c7 08 00
	[  +1.023867] IPv4: martian source 10.244.0.3 from 192.168.49.2, on dev eth0
	[  +0.000026] ll header: 00000000: 0a 94 8e 4a 1a 1e 72 e7 eb 50 9b c7 08 00
	[  +1.023880] IPv4: martian source 10.244.0.3 from 192.168.49.2, on dev eth0
	[  +0.000033] ll header: 00000000: 0a 94 8e 4a 1a 1e 72 e7 eb 50 9b c7 08 00
	[  +1.023884] IPv4: martian source 10.244.0.3 from 192.168.49.2, on dev eth0
	[  +0.000029] ll header: 00000000: 0a 94 8e 4a 1a 1e 72 e7 eb 50 9b c7 08 00
	[  +1.023851] IPv4: martian source 10.244.0.3 from 192.168.49.2, on dev eth0
	[  +0.000028] ll header: 00000000: 0a 94 8e 4a 1a 1e 72 e7 eb 50 9b c7 08 00
	[  +4.031524] IPv4: martian source 10.244.0.3 from 192.168.49.2, on dev eth0
	[  +0.000026] ll header: 00000000: 0a 94 8e 4a 1a 1e 72 e7 eb 50 9b c7 08 00
	[  +8.255123] IPv4: martian source 10.244.0.3 from 192.168.49.2, on dev eth0
	[  +0.000022] ll header: 00000000: 0a 94 8e 4a 1a 1e 72 e7 eb 50 9b c7 08 00
	
	
	==> etcd [2a45db34a9fe870c983de6fc512d53f3a0e7a1ef36ee91ad85e2fffdc072b3a0] <==
	{"level":"warn","ts":"2025-11-01T09:21:34.914325Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44596","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:21:34.930452Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44620","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:21:34.939009Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44640","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:21:34.948227Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44654","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:21:34.956465Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44682","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:21:34.965260Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44694","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:21:34.974170Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44702","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:21:34.989235Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44730","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:21:34.997413Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44742","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:21:35.005839Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44762","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:21:35.014050Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44786","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:21:35.028953Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44804","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:21:35.037461Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44808","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:21:35.050452Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44824","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:21:35.058302Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44854","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:21:35.069413Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44872","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:21:35.076173Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44884","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:21:35.083859Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44918","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:21:35.092857Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44928","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:21:35.100309Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44940","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:21:35.118034Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44970","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:21:35.130756Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44990","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:21:35.138612Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45008","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:21:35.145985Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45026","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:21:35.202737Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45056","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 09:21:45 up  1:04,  0 user,  load average: 4.51, 2.97, 1.74
	Linux newest-cni-340756 6.8.0-1043-gcp #46~22.04.1-Ubuntu SMP Wed Oct 22 19:00:03 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [faec9a1b6bc70e57d63655ba0b358ca80c6c7a3452235369e73c522520c3824b] <==
	I1101 09:21:44.360913       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1101 09:21:44.361191       1 main.go:139] hostIP = 192.168.94.2
	podIP = 192.168.94.2
	I1101 09:21:44.361315       1 main.go:148] setting mtu 1500 for CNI 
	I1101 09:21:44.361329       1 main.go:178] kindnetd IP family: "ipv4"
	I1101 09:21:44.361356       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-01T09:21:44Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1101 09:21:44.657774       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1101 09:21:44.658387       1 controller.go:381] "Waiting for informer caches to sync"
	I1101 09:21:44.658408       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1101 09:21:44.658610       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1101 09:21:45.059148       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1101 09:21:45.059188       1 metrics.go:72] Registering metrics
	I1101 09:21:45.059277       1 controller.go:711] "Syncing nftables rules"
	
	
	==> kube-apiserver [935ac79bf2b1fa432fa43f48f463d67a1b567f2139b2ecc2167369bbf8ef5907] <==
	I1101 09:21:35.739014       1 autoregister_controller.go:144] Starting autoregister controller
	I1101 09:21:35.739021       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1101 09:21:35.739027       1 cache.go:39] Caches are synced for autoregister controller
	I1101 09:21:35.740940       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1101 09:21:35.741066       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1101 09:21:35.745757       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1101 09:21:35.746243       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1101 09:21:35.931041       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1101 09:21:36.641887       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1101 09:21:36.645735       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1101 09:21:36.645753       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1101 09:21:37.166131       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1101 09:21:37.216348       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1101 09:21:37.336207       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1101 09:21:37.343485       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.94.2]
	I1101 09:21:37.344963       1 controller.go:667] quota admission added evaluator for: endpoints
	I1101 09:21:37.349590       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1101 09:21:37.676472       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1101 09:21:38.474044       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1101 09:21:38.484971       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1101 09:21:38.493856       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1101 09:21:42.679095       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1101 09:21:43.582600       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1101 09:21:43.590738       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1101 09:21:43.830781       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	
	
	==> kube-controller-manager [2d32f2c01f9a4ba49999342a2315d27c1bd1e61b02432488955b0c0fec6ca4f6] <==
	I1101 09:21:42.661952       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1101 09:21:42.674432       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1101 09:21:42.674562       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1101 09:21:42.674579       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1101 09:21:42.674587       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1101 09:21:42.674826       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1101 09:21:42.675641       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1101 09:21:42.675708       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1101 09:21:42.676002       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1101 09:21:42.676123       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1101 09:21:42.676236       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1101 09:21:42.676297       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1101 09:21:42.677038       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1101 09:21:42.677072       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1101 09:21:42.677100       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1101 09:21:42.677191       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1101 09:21:42.679646       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1101 09:21:42.679713       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1101 09:21:42.679733       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1101 09:21:42.679766       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1101 09:21:42.679773       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1101 09:21:42.679780       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1101 09:21:42.683169       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1101 09:21:42.687360       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="newest-cni-340756" podCIDRs=["10.42.0.0/24"]
	I1101 09:21:42.712418       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [ed1acbdf5a853043c16fa3c710ec64c00afbe8c1f11b261f337fe74a67ff0b97] <==
	I1101 09:21:44.251284       1 server_linux.go:53] "Using iptables proxy"
	I1101 09:21:44.321729       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1101 09:21:44.422085       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1101 09:21:44.422122       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.94.2"]
	E1101 09:21:44.422227       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1101 09:21:44.443564       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1101 09:21:44.443639       1 server_linux.go:132] "Using iptables Proxier"
	I1101 09:21:44.450104       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1101 09:21:44.450478       1 server.go:527] "Version info" version="v1.34.1"
	I1101 09:21:44.450505       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1101 09:21:44.452577       1 config.go:200] "Starting service config controller"
	I1101 09:21:44.452599       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1101 09:21:44.452619       1 config.go:106] "Starting endpoint slice config controller"
	I1101 09:21:44.452655       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1101 09:21:44.452731       1 config.go:403] "Starting serviceCIDR config controller"
	I1101 09:21:44.452739       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1101 09:21:44.452744       1 config.go:309] "Starting node config controller"
	I1101 09:21:44.452754       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1101 09:21:44.452762       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1101 09:21:44.552795       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1101 09:21:44.552819       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1101 09:21:44.552844       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [5aaf08865bfb18717abd2121578ee0be5d23e40aac21b626f94cf657c6c704af] <==
	E1101 09:21:35.691639       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1101 09:21:35.691703       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1101 09:21:35.691718       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1101 09:21:35.691767       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1101 09:21:35.691768       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1101 09:21:35.691786       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1101 09:21:35.692212       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1101 09:21:35.692360       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1101 09:21:35.692600       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1101 09:21:35.693187       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1101 09:21:35.693187       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1101 09:21:35.693326       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1101 09:21:35.694014       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1101 09:21:36.561354       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1101 09:21:36.636078       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1101 09:21:36.727056       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1101 09:21:36.752291       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1101 09:21:36.755412       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1101 09:21:36.767625       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1101 09:21:36.852206       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1101 09:21:36.856328       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1101 09:21:36.970962       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1101 09:21:36.975959       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1101 09:21:37.158194       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	I1101 09:21:39.689173       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 01 09:21:38 newest-cni-340756 kubelet[1319]: I1101 09:21:38.588286    1319 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/acae05e95497f02b3a31cae268a25f93-k8s-certs\") pod \"kube-controller-manager-newest-cni-340756\" (UID: \"acae05e95497f02b3a31cae268a25f93\") " pod="kube-system/kube-controller-manager-newest-cni-340756"
	Nov 01 09:21:39 newest-cni-340756 kubelet[1319]: I1101 09:21:39.278537    1319 apiserver.go:52] "Watching apiserver"
	Nov 01 09:21:39 newest-cni-340756 kubelet[1319]: I1101 09:21:39.287904    1319 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Nov 01 09:21:39 newest-cni-340756 kubelet[1319]: I1101 09:21:39.323669    1319 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/etcd-newest-cni-340756"
	Nov 01 09:21:39 newest-cni-340756 kubelet[1319]: I1101 09:21:39.324240    1319 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-newest-cni-340756"
	Nov 01 09:21:39 newest-cni-340756 kubelet[1319]: I1101 09:21:39.324358    1319 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-newest-cni-340756"
	Nov 01 09:21:39 newest-cni-340756 kubelet[1319]: E1101 09:21:39.335483    1319 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"etcd-newest-cni-340756\" already exists" pod="kube-system/etcd-newest-cni-340756"
	Nov 01 09:21:39 newest-cni-340756 kubelet[1319]: E1101 09:21:39.336555    1319 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-newest-cni-340756\" already exists" pod="kube-system/kube-apiserver-newest-cni-340756"
	Nov 01 09:21:39 newest-cni-340756 kubelet[1319]: E1101 09:21:39.336592    1319 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-newest-cni-340756\" already exists" pod="kube-system/kube-scheduler-newest-cni-340756"
	Nov 01 09:21:39 newest-cni-340756 kubelet[1319]: I1101 09:21:39.353305    1319 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-newest-cni-340756" podStartSLOduration=1.353267395 podStartE2EDuration="1.353267395s" podCreationTimestamp="2025-11-01 09:21:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 09:21:39.352792518 +0000 UTC m=+1.135028714" watchObservedRunningTime="2025-11-01 09:21:39.353267395 +0000 UTC m=+1.135503594"
	Nov 01 09:21:39 newest-cni-340756 kubelet[1319]: I1101 09:21:39.365625    1319 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-newest-cni-340756" podStartSLOduration=1.365600677 podStartE2EDuration="1.365600677s" podCreationTimestamp="2025-11-01 09:21:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 09:21:39.365488188 +0000 UTC m=+1.147724373" watchObservedRunningTime="2025-11-01 09:21:39.365600677 +0000 UTC m=+1.147836868"
	Nov 01 09:21:39 newest-cni-340756 kubelet[1319]: I1101 09:21:39.395394    1319 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-newest-cni-340756" podStartSLOduration=1.395357244 podStartE2EDuration="1.395357244s" podCreationTimestamp="2025-11-01 09:21:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 09:21:39.379289832 +0000 UTC m=+1.161526030" watchObservedRunningTime="2025-11-01 09:21:39.395357244 +0000 UTC m=+1.177593441"
	Nov 01 09:21:39 newest-cni-340756 kubelet[1319]: I1101 09:21:39.413176    1319 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-newest-cni-340756" podStartSLOduration=1.413150745 podStartE2EDuration="1.413150745s" podCreationTimestamp="2025-11-01 09:21:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 09:21:39.395316553 +0000 UTC m=+1.177552750" watchObservedRunningTime="2025-11-01 09:21:39.413150745 +0000 UTC m=+1.195386936"
	Nov 01 09:21:42 newest-cni-340756 kubelet[1319]: I1101 09:21:42.741360    1319 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.42.0.0/24"
	Nov 01 09:21:42 newest-cni-340756 kubelet[1319]: I1101 09:21:42.742145    1319 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.42.0.0/24"
	Nov 01 09:21:43 newest-cni-340756 kubelet[1319]: I1101 09:21:43.926127    1319 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e6a908ac-4dfb-4f1c-8059-79695562a817-lib-modules\") pod \"kube-proxy-wp2h9\" (UID: \"e6a908ac-4dfb-4f1c-8059-79695562a817\") " pod="kube-system/kube-proxy-wp2h9"
	Nov 01 09:21:43 newest-cni-340756 kubelet[1319]: I1101 09:21:43.926193    1319 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/9c4e4a33-eff1-47ec-94bc-7f9196c547ff-cni-cfg\") pod \"kindnet-gjnst\" (UID: \"9c4e4a33-eff1-47ec-94bc-7f9196c547ff\") " pod="kube-system/kindnet-gjnst"
	Nov 01 09:21:43 newest-cni-340756 kubelet[1319]: I1101 09:21:43.926216    1319 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9c4e4a33-eff1-47ec-94bc-7f9196c547ff-lib-modules\") pod \"kindnet-gjnst\" (UID: \"9c4e4a33-eff1-47ec-94bc-7f9196c547ff\") " pod="kube-system/kindnet-gjnst"
	Nov 01 09:21:43 newest-cni-340756 kubelet[1319]: I1101 09:21:43.926240    1319 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-stwv9\" (UniqueName: \"kubernetes.io/projected/9c4e4a33-eff1-47ec-94bc-7f9196c547ff-kube-api-access-stwv9\") pod \"kindnet-gjnst\" (UID: \"9c4e4a33-eff1-47ec-94bc-7f9196c547ff\") " pod="kube-system/kindnet-gjnst"
	Nov 01 09:21:43 newest-cni-340756 kubelet[1319]: I1101 09:21:43.926474    1319 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/e6a908ac-4dfb-4f1c-8059-79695562a817-kube-proxy\") pod \"kube-proxy-wp2h9\" (UID: \"e6a908ac-4dfb-4f1c-8059-79695562a817\") " pod="kube-system/kube-proxy-wp2h9"
	Nov 01 09:21:43 newest-cni-340756 kubelet[1319]: I1101 09:21:43.926579    1319 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e6a908ac-4dfb-4f1c-8059-79695562a817-xtables-lock\") pod \"kube-proxy-wp2h9\" (UID: \"e6a908ac-4dfb-4f1c-8059-79695562a817\") " pod="kube-system/kube-proxy-wp2h9"
	Nov 01 09:21:43 newest-cni-340756 kubelet[1319]: I1101 09:21:43.926649    1319 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9c4e4a33-eff1-47ec-94bc-7f9196c547ff-xtables-lock\") pod \"kindnet-gjnst\" (UID: \"9c4e4a33-eff1-47ec-94bc-7f9196c547ff\") " pod="kube-system/kindnet-gjnst"
	Nov 01 09:21:43 newest-cni-340756 kubelet[1319]: I1101 09:21:43.926676    1319 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z2gg5\" (UniqueName: \"kubernetes.io/projected/e6a908ac-4dfb-4f1c-8059-79695562a817-kube-api-access-z2gg5\") pod \"kube-proxy-wp2h9\" (UID: \"e6a908ac-4dfb-4f1c-8059-79695562a817\") " pod="kube-system/kube-proxy-wp2h9"
	Nov 01 09:21:44 newest-cni-340756 kubelet[1319]: I1101 09:21:44.345057    1319 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-gjnst" podStartSLOduration=1.345032967 podStartE2EDuration="1.345032967s" podCreationTimestamp="2025-11-01 09:21:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 09:21:44.344966473 +0000 UTC m=+6.127202675" watchObservedRunningTime="2025-11-01 09:21:44.345032967 +0000 UTC m=+6.127269162"
	Nov 01 09:21:44 newest-cni-340756 kubelet[1319]: I1101 09:21:44.355738    1319 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-wp2h9" podStartSLOduration=1.355715225 podStartE2EDuration="1.355715225s" podCreationTimestamp="2025-11-01 09:21:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 09:21:44.355525456 +0000 UTC m=+6.137761651" watchObservedRunningTime="2025-11-01 09:21:44.355715225 +0000 UTC m=+6.137951421"
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-340756 -n newest-cni-340756
helpers_test.go:269: (dbg) Run:  kubectl --context newest-cni-340756 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: coredns-66bc5c9577-tmnp2 storage-provisioner
helpers_test.go:282: ======> post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context newest-cni-340756 describe pod coredns-66bc5c9577-tmnp2 storage-provisioner
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context newest-cni-340756 describe pod coredns-66bc5c9577-tmnp2 storage-provisioner: exit status 1 (63.781582ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "coredns-66bc5c9577-tmnp2" not found
	Error from server (NotFound): pods "storage-provisioner" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context newest-cni-340756 describe pod coredns-66bc5c9577-tmnp2 storage-provisioner: exit status 1
--- FAIL: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (2.28s)
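The scheduler's "Failed to watch ... is forbidden" lines captured above are what kube-scheduler typically logs while it restarts before its RBAC bindings are visible again; they stop once "Caches are synced" appears. As a follow-up check (not part of the harness; the context name is taken from this run, the commands are ordinary kubectl usage), the denied cluster-scope permissions and the two non-running pods can be inspected directly:

	# confirm the scheduler identity can list the resources it was briefly denied
	kubectl --context newest-cni-340756 auth can-i list nodes --as=system:kube-scheduler
	kubectl --context newest-cni-340756 auth can-i list resourceslices.resource.k8s.io --as=system:kube-scheduler
	# same field selector the post-mortem uses to find non-running pods
	kubectl --context newest-cni-340756 get pods -A --field-selector=status.phase!=Running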

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/Pause (6.92s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-236314 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p embed-certs-236314 --alsologtostderr -v=1: exit status 80 (2.252548732s)

                                                
                                                
-- stdout --
	* Pausing node embed-certs-236314 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1101 09:21:55.371930  272292 out.go:360] Setting OutFile to fd 1 ...
	I1101 09:21:55.372074  272292 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:21:55.372086  272292 out.go:374] Setting ErrFile to fd 2...
	I1101 09:21:55.372094  272292 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:21:55.372345  272292 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21835-5913/.minikube/bin
	I1101 09:21:55.372601  272292 out.go:368] Setting JSON to false
	I1101 09:21:55.372656  272292 mustload.go:66] Loading cluster: embed-certs-236314
	I1101 09:21:55.373041  272292 config.go:182] Loaded profile config "embed-certs-236314": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:21:55.373451  272292 cli_runner.go:164] Run: docker container inspect embed-certs-236314 --format={{.State.Status}}
	I1101 09:21:55.394088  272292 host.go:66] Checking if "embed-certs-236314" exists ...
	I1101 09:21:55.394415  272292 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 09:21:55.464418  272292 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:86 OomKillDisable:false NGoroutines:88 SystemTime:2025-11-01 09:21:55.45267587 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x8
6_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1101 09:21:55.465096  272292 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-
cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21800/minikube-v1.37.0-1761658712-21800-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1761658712-21800/minikube-v1.37.0-1761658712-21800-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1761658712-21800-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qe
mu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:embed-certs-236314 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true
) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1101 09:21:55.467036  272292 out.go:179] * Pausing node embed-certs-236314 ... 
	I1101 09:21:55.469021  272292 host.go:66] Checking if "embed-certs-236314" exists ...
	I1101 09:21:55.469410  272292 ssh_runner.go:195] Run: systemctl --version
	I1101 09:21:55.469464  272292 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-236314
	I1101 09:21:55.490769  272292 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33078 SSHKeyPath:/home/jenkins/minikube-integration/21835-5913/.minikube/machines/embed-certs-236314/id_rsa Username:docker}
	I1101 09:21:55.592113  272292 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 09:21:55.605458  272292 pause.go:52] kubelet running: true
	I1101 09:21:55.605514  272292 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1101 09:21:55.783237  272292 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1101 09:21:55.783336  272292 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1101 09:21:55.856441  272292 cri.go:89] found id: "6c99ae25ef0e9393dddf231085bca13268e9d35c7587c2535d9874ef0b8bc855"
	I1101 09:21:55.856467  272292 cri.go:89] found id: "8d11e282bdb581fc10907660c4ed84334e43ee3c72fbd91f47dfa5bd7fadf948"
	I1101 09:21:55.856471  272292 cri.go:89] found id: "c76b1cf0e992cc091c5557f5c0067cc245d9e9be10f9683721fbc495f757f1dd"
	I1101 09:21:55.856474  272292 cri.go:89] found id: "ff5eeb3598d0ee0d8632ba6b2c43ba490782a9c06cdab6d790fbd85ba9094d8e"
	I1101 09:21:55.856477  272292 cri.go:89] found id: "cf0921be2c864b0ad5e89bbcde93cfdeb7214cf2e8fbeeb40447ed91e7d93636"
	I1101 09:21:55.856480  272292 cri.go:89] found id: "bca3056e4356124989f2b2cba8377cf3f660970574583fcca877cb776005e6ca"
	I1101 09:21:55.856482  272292 cri.go:89] found id: "cdf866b372073a7755ed447cdf8634d89a5c22e16db02cc9cfe7c76643d51a6c"
	I1101 09:21:55.856484  272292 cri.go:89] found id: "c53066ca825ef150c1b3480d4c681c275883620b56bfc97b3e50480bdd6dc761"
	I1101 09:21:55.856486  272292 cri.go:89] found id: "63c22508cf7059b3b3f3d3dca5c0c8bae9ba37801ed8914d301b3b69f0fc7f4d"
	I1101 09:21:55.856492  272292 cri.go:89] found id: "97e232d23f29552301319ab346cf13e85b89566b637b24177cd78bdfb630fd2f"
	I1101 09:21:55.856494  272292 cri.go:89] found id: "ab9a1c2871ebfdf28b52214510e1799784842fc9e5a2a4f8ac62fa64668e5010"
	I1101 09:21:55.856497  272292 cri.go:89] found id: ""
	I1101 09:21:55.856532  272292 ssh_runner.go:195] Run: sudo runc list -f json
	I1101 09:21:55.869759  272292 retry.go:31] will retry after 228.994187ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T09:21:55Z" level=error msg="open /run/runc: no such file or directory"
	I1101 09:21:56.099299  272292 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 09:21:56.112822  272292 pause.go:52] kubelet running: false
	I1101 09:21:56.112907  272292 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1101 09:21:56.262558  272292 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1101 09:21:56.262642  272292 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1101 09:21:56.332770  272292 cri.go:89] found id: "6c99ae25ef0e9393dddf231085bca13268e9d35c7587c2535d9874ef0b8bc855"
	I1101 09:21:56.332797  272292 cri.go:89] found id: "8d11e282bdb581fc10907660c4ed84334e43ee3c72fbd91f47dfa5bd7fadf948"
	I1101 09:21:56.332803  272292 cri.go:89] found id: "c76b1cf0e992cc091c5557f5c0067cc245d9e9be10f9683721fbc495f757f1dd"
	I1101 09:21:56.332809  272292 cri.go:89] found id: "ff5eeb3598d0ee0d8632ba6b2c43ba490782a9c06cdab6d790fbd85ba9094d8e"
	I1101 09:21:56.332813  272292 cri.go:89] found id: "cf0921be2c864b0ad5e89bbcde93cfdeb7214cf2e8fbeeb40447ed91e7d93636"
	I1101 09:21:56.332818  272292 cri.go:89] found id: "bca3056e4356124989f2b2cba8377cf3f660970574583fcca877cb776005e6ca"
	I1101 09:21:56.332823  272292 cri.go:89] found id: "cdf866b372073a7755ed447cdf8634d89a5c22e16db02cc9cfe7c76643d51a6c"
	I1101 09:21:56.332828  272292 cri.go:89] found id: "c53066ca825ef150c1b3480d4c681c275883620b56bfc97b3e50480bdd6dc761"
	I1101 09:21:56.332832  272292 cri.go:89] found id: "63c22508cf7059b3b3f3d3dca5c0c8bae9ba37801ed8914d301b3b69f0fc7f4d"
	I1101 09:21:56.332848  272292 cri.go:89] found id: "97e232d23f29552301319ab346cf13e85b89566b637b24177cd78bdfb630fd2f"
	I1101 09:21:56.332856  272292 cri.go:89] found id: "ab9a1c2871ebfdf28b52214510e1799784842fc9e5a2a4f8ac62fa64668e5010"
	I1101 09:21:56.332860  272292 cri.go:89] found id: ""
	I1101 09:21:56.332920  272292 ssh_runner.go:195] Run: sudo runc list -f json
	I1101 09:21:56.345731  272292 retry.go:31] will retry after 297.572556ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T09:21:56Z" level=error msg="open /run/runc: no such file or directory"
	I1101 09:21:56.644265  272292 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 09:21:56.657965  272292 pause.go:52] kubelet running: false
	I1101 09:21:56.658035  272292 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1101 09:21:56.809928  272292 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1101 09:21:56.810053  272292 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1101 09:21:56.883853  272292 cri.go:89] found id: "6c99ae25ef0e9393dddf231085bca13268e9d35c7587c2535d9874ef0b8bc855"
	I1101 09:21:56.883898  272292 cri.go:89] found id: "8d11e282bdb581fc10907660c4ed84334e43ee3c72fbd91f47dfa5bd7fadf948"
	I1101 09:21:56.883905  272292 cri.go:89] found id: "c76b1cf0e992cc091c5557f5c0067cc245d9e9be10f9683721fbc495f757f1dd"
	I1101 09:21:56.883910  272292 cri.go:89] found id: "ff5eeb3598d0ee0d8632ba6b2c43ba490782a9c06cdab6d790fbd85ba9094d8e"
	I1101 09:21:56.883915  272292 cri.go:89] found id: "cf0921be2c864b0ad5e89bbcde93cfdeb7214cf2e8fbeeb40447ed91e7d93636"
	I1101 09:21:56.883921  272292 cri.go:89] found id: "bca3056e4356124989f2b2cba8377cf3f660970574583fcca877cb776005e6ca"
	I1101 09:21:56.883925  272292 cri.go:89] found id: "cdf866b372073a7755ed447cdf8634d89a5c22e16db02cc9cfe7c76643d51a6c"
	I1101 09:21:56.883930  272292 cri.go:89] found id: "c53066ca825ef150c1b3480d4c681c275883620b56bfc97b3e50480bdd6dc761"
	I1101 09:21:56.883934  272292 cri.go:89] found id: "63c22508cf7059b3b3f3d3dca5c0c8bae9ba37801ed8914d301b3b69f0fc7f4d"
	I1101 09:21:56.883942  272292 cri.go:89] found id: "97e232d23f29552301319ab346cf13e85b89566b637b24177cd78bdfb630fd2f"
	I1101 09:21:56.883950  272292 cri.go:89] found id: "ab9a1c2871ebfdf28b52214510e1799784842fc9e5a2a4f8ac62fa64668e5010"
	I1101 09:21:56.883955  272292 cri.go:89] found id: ""
	I1101 09:21:56.883996  272292 ssh_runner.go:195] Run: sudo runc list -f json
	I1101 09:21:56.896126  272292 retry.go:31] will retry after 402.31324ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T09:21:56Z" level=error msg="open /run/runc: no such file or directory"
	I1101 09:21:57.298748  272292 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 09:21:57.312210  272292 pause.go:52] kubelet running: false
	I1101 09:21:57.312266  272292 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1101 09:21:57.454609  272292 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1101 09:21:57.454693  272292 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1101 09:21:57.537941  272292 cri.go:89] found id: "6c99ae25ef0e9393dddf231085bca13268e9d35c7587c2535d9874ef0b8bc855"
	I1101 09:21:57.537968  272292 cri.go:89] found id: "8d11e282bdb581fc10907660c4ed84334e43ee3c72fbd91f47dfa5bd7fadf948"
	I1101 09:21:57.537974  272292 cri.go:89] found id: "c76b1cf0e992cc091c5557f5c0067cc245d9e9be10f9683721fbc495f757f1dd"
	I1101 09:21:57.537982  272292 cri.go:89] found id: "ff5eeb3598d0ee0d8632ba6b2c43ba490782a9c06cdab6d790fbd85ba9094d8e"
	I1101 09:21:57.537987  272292 cri.go:89] found id: "cf0921be2c864b0ad5e89bbcde93cfdeb7214cf2e8fbeeb40447ed91e7d93636"
	I1101 09:21:57.537992  272292 cri.go:89] found id: "bca3056e4356124989f2b2cba8377cf3f660970574583fcca877cb776005e6ca"
	I1101 09:21:57.538005  272292 cri.go:89] found id: "cdf866b372073a7755ed447cdf8634d89a5c22e16db02cc9cfe7c76643d51a6c"
	I1101 09:21:57.538007  272292 cri.go:89] found id: "c53066ca825ef150c1b3480d4c681c275883620b56bfc97b3e50480bdd6dc761"
	I1101 09:21:57.538010  272292 cri.go:89] found id: "63c22508cf7059b3b3f3d3dca5c0c8bae9ba37801ed8914d301b3b69f0fc7f4d"
	I1101 09:21:57.538015  272292 cri.go:89] found id: "97e232d23f29552301319ab346cf13e85b89566b637b24177cd78bdfb630fd2f"
	I1101 09:21:57.538018  272292 cri.go:89] found id: "ab9a1c2871ebfdf28b52214510e1799784842fc9e5a2a4f8ac62fa64668e5010"
	I1101 09:21:57.538020  272292 cri.go:89] found id: ""
	I1101 09:21:57.538060  272292 ssh_runner.go:195] Run: sudo runc list -f json
	I1101 09:21:57.553102  272292 out.go:203] 
	W1101 09:21:57.554427  272292 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T09:21:57Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T09:21:57Z" level=error msg="open /run/runc: no such file or directory"
	
	W1101 09:21:57.554445  272292 out.go:285] * 
	* 
	W1101 09:21:57.558486  272292 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1101 09:21:57.559742  272292 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-amd64 pause -p embed-certs-236314 --alsologtostderr -v=1 failed: exit status 80
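The pause failure reduces to the repeated `sudo runc list -f json` error, `open /run/runc: no such file or directory`: minikube asks runc for its container list at runc's default state root, which does not exist on this cri-o node. A minimal manual check is sketched below; the `/run/crio/runc` path is an assumption about where this cri-o installation keeps runc state (verify against the crio configuration), not something taken from the logs:

	# confirm the default runc state root really is missing inside the node
	minikube -p embed-certs-236314 ssh -- sudo ls /run/runc
	# retry runc against an alternative state root (path is an assumption)
	minikube -p embed-certs-236314 ssh -- sudo runc --root /run/crio/runc list -f json
	# crictl queries cri-o directly and does not depend on the runc root
	minikube -p embed-certs-236314 ssh -- sudo crictl ps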
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect embed-certs-236314
helpers_test.go:243: (dbg) docker inspect embed-certs-236314:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "9e1a1d18390353a69d0f0d9962510aadff6b6976ac58bf83ab99b25a52d19c64",
	        "Created": "2025-11-01T09:19:56.919781471Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 256444,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-01T09:20:59.66866613Z",
	            "FinishedAt": "2025-11-01T09:20:58.750556681Z"
	        },
	        "Image": "sha256:a1caeebaf98ed0136731e905a1e086f77985a42c2ebb5a7e0b3d0bd7fcbe10cc",
	        "ResolvConfPath": "/var/lib/docker/containers/9e1a1d18390353a69d0f0d9962510aadff6b6976ac58bf83ab99b25a52d19c64/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/9e1a1d18390353a69d0f0d9962510aadff6b6976ac58bf83ab99b25a52d19c64/hostname",
	        "HostsPath": "/var/lib/docker/containers/9e1a1d18390353a69d0f0d9962510aadff6b6976ac58bf83ab99b25a52d19c64/hosts",
	        "LogPath": "/var/lib/docker/containers/9e1a1d18390353a69d0f0d9962510aadff6b6976ac58bf83ab99b25a52d19c64/9e1a1d18390353a69d0f0d9962510aadff6b6976ac58bf83ab99b25a52d19c64-json.log",
	        "Name": "/embed-certs-236314",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-236314:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "embed-certs-236314",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "9e1a1d18390353a69d0f0d9962510aadff6b6976ac58bf83ab99b25a52d19c64",
	                "LowerDir": "/var/lib/docker/overlay2/058db38a3e51e77a68a2911f27d674e0411b25d26e2fe50bb66959a3e62a7c04-init/diff:/var/lib/docker/overlay2/95b4043403bc6c2b0722fad0bb31817195e1e004282cc78b772e7c8c9f9def2d/diff",
	                "MergedDir": "/var/lib/docker/overlay2/058db38a3e51e77a68a2911f27d674e0411b25d26e2fe50bb66959a3e62a7c04/merged",
	                "UpperDir": "/var/lib/docker/overlay2/058db38a3e51e77a68a2911f27d674e0411b25d26e2fe50bb66959a3e62a7c04/diff",
	                "WorkDir": "/var/lib/docker/overlay2/058db38a3e51e77a68a2911f27d674e0411b25d26e2fe50bb66959a3e62a7c04/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "embed-certs-236314",
	                "Source": "/var/lib/docker/volumes/embed-certs-236314/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-236314",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-236314",
	                "name.minikube.sigs.k8s.io": "embed-certs-236314",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "2757941023d9cf67183fa060e6ff1d75306699f398afbf89ce4bd002b69d1655",
	            "SandboxKey": "/var/run/docker/netns/2757941023d9",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33078"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33079"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33082"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33080"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33081"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "embed-certs-236314": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "72:37:5a:d3:4f:7a",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "2f536846b22cd19ee4958cff8ea6caf971d5b2fed6041edde3ccc625d2886d4f",
	                    "EndpointID": "a6b2dd490befa02bad0495d574ab23cf733c1a8cc81f831965f0f0f597b0a4b3",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-236314",
	                        "9e1a1d183903"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
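The inspect output shows the container still running with SSH published on 127.0.0.1:33078, the same endpoint the pause command dialed. The Go template the harness uses (visible in the stderr above) can be reused by hand to read any published port; only the port key changes:

	# host ports mapped to the node's SSH (22/tcp) and API server (8443/tcp) endpoints
	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' embed-certs-236314
	docker container inspect -f '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}' embed-certs-236314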
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-236314 -n embed-certs-236314
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-236314 -n embed-certs-236314: exit status 2 (345.045819ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
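Since the partial pause already disabled the kubelet, a wider status template shows which components the post-mortem saw stopped. This is a generic check rather than part of the harness, and it assumes the usual Host/Kubelet/APIServer fields of `minikube status`:

	# host should report Running while kubelet is Stopped after the failed pause
	out/minikube-linux-amd64 status -p embed-certs-236314 --format '{{.Host}} {{.Kubelet}} {{.APIServer}}'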
helpers_test.go:252: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-236314 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-236314 logs -n 25: (1.373699531s)
helpers_test.go:260: TestStartStop/group/embed-certs/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ stop    │ -p no-preload-397460 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-397460            │ jenkins │ v1.37.0 │ 01 Nov 25 09:19 UTC │ 01 Nov 25 09:20 UTC │
	│ start   │ -p embed-certs-236314 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-236314           │ jenkins │ v1.37.0 │ 01 Nov 25 09:19 UTC │ 01 Nov 25 09:20 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-152344 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-152344       │ jenkins │ v1.37.0 │ 01 Nov 25 09:19 UTC │ 01 Nov 25 09:19 UTC │
	│ start   │ -p old-k8s-version-152344 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-152344       │ jenkins │ v1.37.0 │ 01 Nov 25 09:19 UTC │ 01 Nov 25 09:20 UTC │
	│ addons  │ enable dashboard -p no-preload-397460 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-397460            │ jenkins │ v1.37.0 │ 01 Nov 25 09:20 UTC │ 01 Nov 25 09:20 UTC │
	│ start   │ -p no-preload-397460 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-397460            │ jenkins │ v1.37.0 │ 01 Nov 25 09:20 UTC │ 01 Nov 25 09:20 UTC │
	│ addons  │ enable metrics-server -p embed-certs-236314 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-236314           │ jenkins │ v1.37.0 │ 01 Nov 25 09:20 UTC │                     │
	│ stop    │ -p embed-certs-236314 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-236314           │ jenkins │ v1.37.0 │ 01 Nov 25 09:20 UTC │ 01 Nov 25 09:20 UTC │
	│ addons  │ enable dashboard -p embed-certs-236314 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-236314           │ jenkins │ v1.37.0 │ 01 Nov 25 09:20 UTC │ 01 Nov 25 09:20 UTC │
	│ start   │ -p embed-certs-236314 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-236314           │ jenkins │ v1.37.0 │ 01 Nov 25 09:20 UTC │ 01 Nov 25 09:21 UTC │
	│ image   │ old-k8s-version-152344 image list --format=json                                                                                                                                                                                               │ old-k8s-version-152344       │ jenkins │ v1.37.0 │ 01 Nov 25 09:21 UTC │ 01 Nov 25 09:21 UTC │
	│ pause   │ -p old-k8s-version-152344 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-152344       │ jenkins │ v1.37.0 │ 01 Nov 25 09:21 UTC │                     │
	│ image   │ no-preload-397460 image list --format=json                                                                                                                                                                                                    │ no-preload-397460            │ jenkins │ v1.37.0 │ 01 Nov 25 09:21 UTC │ 01 Nov 25 09:21 UTC │
	│ pause   │ -p no-preload-397460 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-397460            │ jenkins │ v1.37.0 │ 01 Nov 25 09:21 UTC │                     │
	│ delete  │ -p old-k8s-version-152344                                                                                                                                                                                                                     │ old-k8s-version-152344       │ jenkins │ v1.37.0 │ 01 Nov 25 09:21 UTC │ 01 Nov 25 09:21 UTC │
	│ delete  │ -p old-k8s-version-152344                                                                                                                                                                                                                     │ old-k8s-version-152344       │ jenkins │ v1.37.0 │ 01 Nov 25 09:21 UTC │ 01 Nov 25 09:21 UTC │
	│ delete  │ -p no-preload-397460                                                                                                                                                                                                                          │ no-preload-397460            │ jenkins │ v1.37.0 │ 01 Nov 25 09:21 UTC │ 01 Nov 25 09:21 UTC │
	│ delete  │ -p disable-driver-mounts-366530                                                                                                                                                                                                               │ disable-driver-mounts-366530 │ jenkins │ v1.37.0 │ 01 Nov 25 09:21 UTC │ 01 Nov 25 09:21 UTC │
	│ start   │ -p default-k8s-diff-port-648641 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-648641 │ jenkins │ v1.37.0 │ 01 Nov 25 09:21 UTC │ 01 Nov 25 09:21 UTC │
	│ delete  │ -p no-preload-397460                                                                                                                                                                                                                          │ no-preload-397460            │ jenkins │ v1.37.0 │ 01 Nov 25 09:21 UTC │ 01 Nov 25 09:21 UTC │
	│ start   │ -p newest-cni-340756 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-340756            │ jenkins │ v1.37.0 │ 01 Nov 25 09:21 UTC │ 01 Nov 25 09:21 UTC │
	│ addons  │ enable metrics-server -p newest-cni-340756 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-340756            │ jenkins │ v1.37.0 │ 01 Nov 25 09:21 UTC │                     │
	│ stop    │ -p newest-cni-340756 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-340756            │ jenkins │ v1.37.0 │ 01 Nov 25 09:21 UTC │                     │
	│ image   │ embed-certs-236314 image list --format=json                                                                                                                                                                                                   │ embed-certs-236314           │ jenkins │ v1.37.0 │ 01 Nov 25 09:21 UTC │ 01 Nov 25 09:21 UTC │
	│ pause   │ -p embed-certs-236314 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-236314           │ jenkins │ v1.37.0 │ 01 Nov 25 09:21 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/01 09:21:16
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1101 09:21:16.432075  263568 out.go:360] Setting OutFile to fd 1 ...
	I1101 09:21:16.432846  263568 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:21:16.432891  263568 out.go:374] Setting ErrFile to fd 2...
	I1101 09:21:16.432898  263568 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:21:16.433460  263568 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21835-5913/.minikube/bin
	I1101 09:21:16.434584  263568 out.go:368] Setting JSON to false
	I1101 09:21:16.436204  263568 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":3824,"bootTime":1761985052,"procs":303,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1101 09:21:16.436384  263568 start.go:143] virtualization: kvm guest
	I1101 09:21:16.463856  263568 out.go:179] * [newest-cni-340756] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1101 09:21:16.469043  263568 out.go:179]   - MINIKUBE_LOCATION=21835
	I1101 09:21:16.469051  263568 notify.go:221] Checking for updates...
	I1101 09:21:16.473202  263568 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1101 09:21:16.475197  263568 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21835-5913/kubeconfig
	I1101 09:21:16.477469  263568 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21835-5913/.minikube
	I1101 09:21:16.479064  263568 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1101 09:21:16.482076  263568 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1101 09:21:16.485231  263568 config.go:182] Loaded profile config "default-k8s-diff-port-648641": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:21:16.485374  263568 config.go:182] Loaded profile config "embed-certs-236314": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:21:16.485491  263568 config.go:182] Loaded profile config "kubernetes-upgrade-846924": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:21:16.485633  263568 driver.go:422] Setting default libvirt URI to qemu:///system
	I1101 09:21:16.524156  263568 docker.go:124] docker version: linux-28.5.1:Docker Engine - Community
	I1101 09:21:16.524472  263568 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 09:21:16.630320  263568 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:53 OomKillDisable:false NGoroutines:67 SystemTime:2025-11-01 09:21:16.616732342 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1101 09:21:16.630455  263568 docker.go:319] overlay module found
	I1101 09:21:16.632143  263568 out.go:179] * Using the docker driver based on user configuration
	I1101 09:21:16.634719  263568 start.go:309] selected driver: docker
	I1101 09:21:16.634742  263568 start.go:930] validating driver "docker" against <nil>
	I1101 09:21:16.634759  263568 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1101 09:21:16.636387  263568 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 09:21:16.721124  263568 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:54 OomKillDisable:false NGoroutines:69 SystemTime:2025-11-01 09:21:16.708216677 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1101 09:21:16.721343  263568 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	W1101 09:21:16.721379  263568 out.go:285] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I1101 09:21:16.721687  263568 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1101 09:21:16.753372  263568 out.go:179] * Using Docker driver with root privileges
	I1101 09:21:16.775951  263568 cni.go:84] Creating CNI manager for ""
	I1101 09:21:16.776069  263568 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1101 09:21:16.776087  263568 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1101 09:21:16.776251  263568 start.go:353] cluster config:
	{Name:newest-cni-340756 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-340756 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnet
ClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 09:21:16.798022  263568 out.go:179] * Starting "newest-cni-340756" primary control-plane node in "newest-cni-340756" cluster
	I1101 09:21:16.804595  263568 cache.go:124] Beginning downloading kic base image for docker with crio
	I1101 09:21:16.807047  263568 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1101 09:21:16.808657  263568 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1101 09:21:16.808716  263568 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21835-5913/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1101 09:21:16.808751  263568 cache.go:59] Caching tarball of preloaded images
	I1101 09:21:16.808757  263568 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1101 09:21:16.808899  263568 preload.go:233] Found /home/jenkins/minikube-integration/21835-5913/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1101 09:21:16.808918  263568 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1101 09:21:16.809051  263568 profile.go:143] Saving config to /home/jenkins/minikube-integration/21835-5913/.minikube/profiles/newest-cni-340756/config.json ...
	I1101 09:21:16.809077  263568 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21835-5913/.minikube/profiles/newest-cni-340756/config.json: {Name:mk5907ec4f9df3976ba620184d9e796a35524126 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:21:16.838145  263568 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1101 09:21:16.838175  263568 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1101 09:21:16.838197  263568 cache.go:233] Successfully downloaded all kic artifacts
	I1101 09:21:16.838225  263568 start.go:360] acquireMachinesLock for newest-cni-340756: {Name:mk88172481da3b8a8d740f548867bdcc84a2d863 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1101 09:21:16.838330  263568 start.go:364] duration metric: took 81.283µs to acquireMachinesLock for "newest-cni-340756"
	I1101 09:21:16.838359  263568 start.go:93] Provisioning new machine with config: &{Name:newest-cni-340756 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-340756 Namespace:default APIServerHAVIP: APIServer
Name:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:fals
e DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1101 09:21:16.838440  263568 start.go:125] createHost starting for "" (driver="docker")
	W1101 09:21:14.570607  256247 pod_ready.go:104] pod "coredns-66bc5c9577-wwvth" is not "Ready", error: <nil>
	W1101 09:21:16.573319  256247 pod_ready.go:104] pod "coredns-66bc5c9577-wwvth" is not "Ready", error: <nil>
	W1101 09:21:19.069347  256247 pod_ready.go:104] pod "coredns-66bc5c9577-wwvth" is not "Ready", error: <nil>
	I1101 09:21:16.189640  216020 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1101 09:21:16.190173  216020 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1101 09:21:16.190234  216020 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1101 09:21:16.190294  216020 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1101 09:21:16.226313  216020 cri.go:89] found id: "b88c1a6c22eb7f0b872bf19c02addfdad95c9f578a8590d19c57a767cb19559f"
	I1101 09:21:16.226342  216020 cri.go:89] found id: ""
	I1101 09:21:16.226352  216020 logs.go:282] 1 containers: [b88c1a6c22eb7f0b872bf19c02addfdad95c9f578a8590d19c57a767cb19559f]
	I1101 09:21:16.226408  216020 ssh_runner.go:195] Run: which crictl
	I1101 09:21:16.231537  216020 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1101 09:21:16.231607  216020 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1101 09:21:16.266019  216020 cri.go:89] found id: ""
	I1101 09:21:16.266047  216020 logs.go:282] 0 containers: []
	W1101 09:21:16.266058  216020 logs.go:284] No container was found matching "etcd"
	I1101 09:21:16.266066  216020 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1101 09:21:16.266127  216020 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1101 09:21:16.302134  216020 cri.go:89] found id: ""
	I1101 09:21:16.302164  216020 logs.go:282] 0 containers: []
	W1101 09:21:16.302176  216020 logs.go:284] No container was found matching "coredns"
	I1101 09:21:16.302183  216020 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1101 09:21:16.302241  216020 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1101 09:21:16.332061  216020 cri.go:89] found id: "7bc51deb4b19b20f402b361941a1b67db11ece9919ebd34259e751ec1f07abb8"
	I1101 09:21:16.332085  216020 cri.go:89] found id: ""
	I1101 09:21:16.332095  216020 logs.go:282] 1 containers: [7bc51deb4b19b20f402b361941a1b67db11ece9919ebd34259e751ec1f07abb8]
	I1101 09:21:16.332150  216020 ssh_runner.go:195] Run: which crictl
	I1101 09:21:16.337277  216020 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1101 09:21:16.337342  216020 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1101 09:21:16.376616  216020 cri.go:89] found id: ""
	I1101 09:21:16.376645  216020 logs.go:282] 0 containers: []
	W1101 09:21:16.376656  216020 logs.go:284] No container was found matching "kube-proxy"
	I1101 09:21:16.376664  216020 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1101 09:21:16.376726  216020 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1101 09:21:16.413540  216020 cri.go:89] found id: "df274c74effda434b9fae196ccc17a5c3a3d2b7872fbe77a79ff7e34adc6b3cd"
	I1101 09:21:16.413565  216020 cri.go:89] found id: ""
	I1101 09:21:16.413575  216020 logs.go:282] 1 containers: [df274c74effda434b9fae196ccc17a5c3a3d2b7872fbe77a79ff7e34adc6b3cd]
	I1101 09:21:16.413631  216020 ssh_runner.go:195] Run: which crictl
	I1101 09:21:16.418883  216020 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1101 09:21:16.418954  216020 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1101 09:21:16.454886  216020 cri.go:89] found id: ""
	I1101 09:21:16.454916  216020 logs.go:282] 0 containers: []
	W1101 09:21:16.454932  216020 logs.go:284] No container was found matching "kindnet"
	I1101 09:21:16.454940  216020 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1101 09:21:16.455000  216020 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1101 09:21:16.502922  216020 cri.go:89] found id: ""
	I1101 09:21:16.502946  216020 logs.go:282] 0 containers: []
	W1101 09:21:16.502966  216020 logs.go:284] No container was found matching "storage-provisioner"
	I1101 09:21:16.502979  216020 logs.go:123] Gathering logs for kubelet ...
	I1101 09:21:16.502993  216020 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1101 09:21:16.676611  216020 logs.go:123] Gathering logs for dmesg ...
	I1101 09:21:16.676660  216020 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1101 09:21:16.701528  216020 logs.go:123] Gathering logs for describe nodes ...
	I1101 09:21:16.701584  216020 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1101 09:21:16.772358  216020 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1101 09:21:16.772385  216020 logs.go:123] Gathering logs for kube-apiserver [b88c1a6c22eb7f0b872bf19c02addfdad95c9f578a8590d19c57a767cb19559f] ...
	I1101 09:21:16.772403  216020 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b88c1a6c22eb7f0b872bf19c02addfdad95c9f578a8590d19c57a767cb19559f"
	I1101 09:21:16.813480  216020 logs.go:123] Gathering logs for kube-scheduler [7bc51deb4b19b20f402b361941a1b67db11ece9919ebd34259e751ec1f07abb8] ...
	I1101 09:21:16.813522  216020 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7bc51deb4b19b20f402b361941a1b67db11ece9919ebd34259e751ec1f07abb8"
	I1101 09:21:16.895592  216020 logs.go:123] Gathering logs for kube-controller-manager [df274c74effda434b9fae196ccc17a5c3a3d2b7872fbe77a79ff7e34adc6b3cd] ...
	I1101 09:21:16.895634  216020 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 df274c74effda434b9fae196ccc17a5c3a3d2b7872fbe77a79ff7e34adc6b3cd"
	I1101 09:21:16.931448  216020 logs.go:123] Gathering logs for CRI-O ...
	I1101 09:21:16.931486  216020 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1101 09:21:17.011546  216020 logs.go:123] Gathering logs for container status ...
	I1101 09:21:17.011592  216020 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1101 09:21:19.550953  216020 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1101 09:21:19.551460  216020 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1101 09:21:19.551521  216020 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1101 09:21:19.551586  216020 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1101 09:21:19.591105  216020 cri.go:89] found id: "b88c1a6c22eb7f0b872bf19c02addfdad95c9f578a8590d19c57a767cb19559f"
	I1101 09:21:19.591131  216020 cri.go:89] found id: ""
	I1101 09:21:19.591141  216020 logs.go:282] 1 containers: [b88c1a6c22eb7f0b872bf19c02addfdad95c9f578a8590d19c57a767cb19559f]
	I1101 09:21:19.591200  216020 ssh_runner.go:195] Run: which crictl
	I1101 09:21:19.597977  216020 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1101 09:21:19.598052  216020 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1101 09:21:19.634859  216020 cri.go:89] found id: ""
	I1101 09:21:19.634907  216020 logs.go:282] 0 containers: []
	W1101 09:21:19.634918  216020 logs.go:284] No container was found matching "etcd"
	I1101 09:21:19.634926  216020 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1101 09:21:19.634991  216020 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1101 09:21:19.674355  216020 cri.go:89] found id: ""
	I1101 09:21:19.674384  216020 logs.go:282] 0 containers: []
	W1101 09:21:19.674396  216020 logs.go:284] No container was found matching "coredns"
	I1101 09:21:19.674404  216020 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1101 09:21:19.674462  216020 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1101 09:21:19.712970  216020 cri.go:89] found id: "7bc51deb4b19b20f402b361941a1b67db11ece9919ebd34259e751ec1f07abb8"
	I1101 09:21:19.712994  216020 cri.go:89] found id: ""
	I1101 09:21:19.713004  216020 logs.go:282] 1 containers: [7bc51deb4b19b20f402b361941a1b67db11ece9919ebd34259e751ec1f07abb8]
	I1101 09:21:19.713069  216020 ssh_runner.go:195] Run: which crictl
	I1101 09:21:19.718651  216020 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1101 09:21:19.718745  216020 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1101 09:21:19.759757  216020 cri.go:89] found id: ""
	I1101 09:21:19.759792  216020 logs.go:282] 0 containers: []
	W1101 09:21:19.759803  216020 logs.go:284] No container was found matching "kube-proxy"
	I1101 09:21:19.759811  216020 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1101 09:21:19.759900  216020 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1101 09:21:19.797006  216020 cri.go:89] found id: "b266a6bfcdcb5bd3d3e21a7e3af65c4839577648b3116ca9c382cea59b909867"
	I1101 09:21:19.797035  216020 cri.go:89] found id: "df274c74effda434b9fae196ccc17a5c3a3d2b7872fbe77a79ff7e34adc6b3cd"
	I1101 09:21:19.797042  216020 cri.go:89] found id: ""
	I1101 09:21:19.797053  216020 logs.go:282] 2 containers: [b266a6bfcdcb5bd3d3e21a7e3af65c4839577648b3116ca9c382cea59b909867 df274c74effda434b9fae196ccc17a5c3a3d2b7872fbe77a79ff7e34adc6b3cd]
	I1101 09:21:19.797124  216020 ssh_runner.go:195] Run: which crictl
	I1101 09:21:19.802830  216020 ssh_runner.go:195] Run: which crictl
	I1101 09:21:19.807973  216020 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1101 09:21:19.808057  216020 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1101 09:21:16.486132  262357 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21835-5913/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v default-k8s-diff-port-648641:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir: (4.512570954s)
	I1101 09:21:16.486168  262357 kic.go:203] duration metric: took 4.512735838s to extract preloaded images to volume ...
	W1101 09:21:16.486267  262357 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1101 09:21:16.486310  262357 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1101 09:21:16.486349  262357 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1101 09:21:16.597382  262357 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname default-k8s-diff-port-648641 --name default-k8s-diff-port-648641 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-648641 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=default-k8s-diff-port-648641 --network default-k8s-diff-port-648641 --ip 192.168.103.2 --volume default-k8s-diff-port-648641:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8444 --publish=127.0.0.1::8444 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8
	I1101 09:21:17.147634  262357 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-648641 --format={{.State.Running}}
	I1101 09:21:17.171138  262357 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-648641 --format={{.State.Status}}
	I1101 09:21:17.192030  262357 cli_runner.go:164] Run: docker exec default-k8s-diff-port-648641 stat /var/lib/dpkg/alternatives/iptables
	I1101 09:21:17.248917  262357 oci.go:144] the created container "default-k8s-diff-port-648641" has a running status.
	I1101 09:21:17.248965  262357 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21835-5913/.minikube/machines/default-k8s-diff-port-648641/id_rsa...
	I1101 09:21:17.431433  262357 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21835-5913/.minikube/machines/default-k8s-diff-port-648641/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1101 09:21:17.466511  262357 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-648641 --format={{.State.Status}}
	I1101 09:21:17.487440  262357 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1101 09:21:17.487463  262357 kic_runner.go:114] Args: [docker exec --privileged default-k8s-diff-port-648641 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1101 09:21:17.561751  262357 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-648641 --format={{.State.Status}}
	I1101 09:21:17.587513  262357 machine.go:94] provisionDockerMachine start ...
	I1101 09:21:17.587618  262357 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-648641
	I1101 09:21:17.615964  262357 main.go:143] libmachine: Using SSH client type: native
	I1101 09:21:17.616207  262357 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33083 <nil> <nil>}
	I1101 09:21:17.616217  262357 main.go:143] libmachine: About to run SSH command:
	hostname
	I1101 09:21:17.772779  262357 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-648641
	
	I1101 09:21:17.772810  262357 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-648641"
	I1101 09:21:17.772955  262357 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-648641
	I1101 09:21:17.799035  262357 main.go:143] libmachine: Using SSH client type: native
	I1101 09:21:17.799440  262357 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33083 <nil> <nil>}
	I1101 09:21:17.799462  262357 main.go:143] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-648641 && echo "default-k8s-diff-port-648641" | sudo tee /etc/hostname
	I1101 09:21:17.969701  262357 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-648641
	
	I1101 09:21:17.969822  262357 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-648641
	I1101 09:21:17.998371  262357 main.go:143] libmachine: Using SSH client type: native
	I1101 09:21:17.999222  262357 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33083 <nil> <nil>}
	I1101 09:21:17.999273  262357 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-648641' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-648641/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-648641' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1101 09:21:18.165152  262357 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1101 09:21:18.165188  262357 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21835-5913/.minikube CaCertPath:/home/jenkins/minikube-integration/21835-5913/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21835-5913/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21835-5913/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21835-5913/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21835-5913/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21835-5913/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21835-5913/.minikube}
	I1101 09:21:18.165214  262357 ubuntu.go:190] setting up certificates
	I1101 09:21:18.165227  262357 provision.go:84] configureAuth start
	I1101 09:21:18.165300  262357 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-648641
	I1101 09:21:18.190725  262357 provision.go:143] copyHostCerts
	I1101 09:21:18.190827  262357 exec_runner.go:144] found /home/jenkins/minikube-integration/21835-5913/.minikube/ca.pem, removing ...
	I1101 09:21:18.190844  262357 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21835-5913/.minikube/ca.pem
	I1101 09:21:18.190924  262357 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21835-5913/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21835-5913/.minikube/ca.pem (1078 bytes)
	I1101 09:21:18.191055  262357 exec_runner.go:144] found /home/jenkins/minikube-integration/21835-5913/.minikube/cert.pem, removing ...
	I1101 09:21:18.191067  262357 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21835-5913/.minikube/cert.pem
	I1101 09:21:18.191108  262357 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21835-5913/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21835-5913/.minikube/cert.pem (1123 bytes)
	I1101 09:21:18.191198  262357 exec_runner.go:144] found /home/jenkins/minikube-integration/21835-5913/.minikube/key.pem, removing ...
	I1101 09:21:18.191209  262357 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21835-5913/.minikube/key.pem
	I1101 09:21:18.191244  262357 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21835-5913/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21835-5913/.minikube/key.pem (1675 bytes)
	I1101 09:21:18.191324  262357 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21835-5913/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21835-5913/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21835-5913/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-648641 san=[127.0.0.1 192.168.103.2 default-k8s-diff-port-648641 localhost minikube]
	I1101 09:21:18.738803  262357 provision.go:177] copyRemoteCerts
	I1101 09:21:18.738895  262357 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1101 09:21:18.738942  262357 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-648641
	I1101 09:21:18.761365  262357 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33083 SSHKeyPath:/home/jenkins/minikube-integration/21835-5913/.minikube/machines/default-k8s-diff-port-648641/id_rsa Username:docker}
	I1101 09:21:18.877913  262357 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-5913/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1101 09:21:18.906075  262357 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-5913/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1101 09:21:18.932027  262357 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-5913/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1101 09:21:18.959236  262357 provision.go:87] duration metric: took 793.970541ms to configureAuth
	I1101 09:21:18.959276  262357 ubuntu.go:206] setting minikube options for container-runtime
	I1101 09:21:18.959483  262357 config.go:182] Loaded profile config "default-k8s-diff-port-648641": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:21:18.959635  262357 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-648641
	I1101 09:21:18.983609  262357 main.go:143] libmachine: Using SSH client type: native
	I1101 09:21:18.983949  262357 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33083 <nil> <nil>}
	I1101 09:21:18.984252  262357 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1101 09:21:19.308095  262357 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1101 09:21:19.308130  262357 machine.go:97] duration metric: took 1.720592129s to provisionDockerMachine
	I1101 09:21:19.308144  262357 client.go:176] duration metric: took 7.925831106s to LocalClient.Create
	I1101 09:21:19.308166  262357 start.go:167] duration metric: took 7.925916594s to libmachine.API.Create "default-k8s-diff-port-648641"
	I1101 09:21:19.308176  262357 start.go:293] postStartSetup for "default-k8s-diff-port-648641" (driver="docker")
	I1101 09:21:19.308195  262357 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1101 09:21:19.308281  262357 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1101 09:21:19.308328  262357 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-648641
	I1101 09:21:19.331386  262357 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33083 SSHKeyPath:/home/jenkins/minikube-integration/21835-5913/.minikube/machines/default-k8s-diff-port-648641/id_rsa Username:docker}
	I1101 09:21:19.444341  262357 ssh_runner.go:195] Run: cat /etc/os-release
	I1101 09:21:19.449749  262357 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1101 09:21:19.449784  262357 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1101 09:21:19.449798  262357 filesync.go:126] Scanning /home/jenkins/minikube-integration/21835-5913/.minikube/addons for local assets ...
	I1101 09:21:19.449854  262357 filesync.go:126] Scanning /home/jenkins/minikube-integration/21835-5913/.minikube/files for local assets ...
	I1101 09:21:19.449995  262357 filesync.go:149] local asset: /home/jenkins/minikube-integration/21835-5913/.minikube/files/etc/ssl/certs/94142.pem -> 94142.pem in /etc/ssl/certs
	I1101 09:21:19.450122  262357 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1101 09:21:19.461479  262357 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-5913/.minikube/files/etc/ssl/certs/94142.pem --> /etc/ssl/certs/94142.pem (1708 bytes)
	I1101 09:21:19.491051  262357 start.go:296] duration metric: took 182.858998ms for postStartSetup
	I1101 09:21:19.491534  262357 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-648641
	I1101 09:21:19.517028  262357 profile.go:143] Saving config to /home/jenkins/minikube-integration/21835-5913/.minikube/profiles/default-k8s-diff-port-648641/config.json ...
	I1101 09:21:19.517349  262357 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1101 09:21:19.517408  262357 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-648641
	I1101 09:21:19.541507  262357 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33083 SSHKeyPath:/home/jenkins/minikube-integration/21835-5913/.minikube/machines/default-k8s-diff-port-648641/id_rsa Username:docker}
	I1101 09:21:19.652734  262357 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1101 09:21:19.659157  262357 start.go:128] duration metric: took 8.28005502s to createHost
	I1101 09:21:19.659188  262357 start.go:83] releasing machines lock for "default-k8s-diff-port-648641", held for 8.280250134s
	I1101 09:21:19.659267  262357 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-648641
	I1101 09:21:19.684195  262357 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1101 09:21:19.684481  262357 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-648641
	I1101 09:21:19.684123  262357 ssh_runner.go:195] Run: cat /version.json
	I1101 09:21:19.684718  262357 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-648641
	I1101 09:21:19.710690  262357 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33083 SSHKeyPath:/home/jenkins/minikube-integration/21835-5913/.minikube/machines/default-k8s-diff-port-648641/id_rsa Username:docker}
	I1101 09:21:19.712349  262357 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33083 SSHKeyPath:/home/jenkins/minikube-integration/21835-5913/.minikube/machines/default-k8s-diff-port-648641/id_rsa Username:docker}
	I1101 09:21:19.823400  262357 ssh_runner.go:195] Run: systemctl --version
	I1101 09:21:19.908689  262357 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1101 09:21:19.964047  262357 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1101 09:21:19.970327  262357 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1101 09:21:19.970400  262357 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1101 09:21:20.014665  262357 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1101 09:21:20.014764  262357 start.go:496] detecting cgroup driver to use...
	I1101 09:21:20.014930  262357 detect.go:190] detected "systemd" cgroup driver on host os
	I1101 09:21:20.015010  262357 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1101 09:21:20.038534  262357 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1101 09:21:20.057741  262357 docker.go:218] disabling cri-docker service (if available) ...
	I1101 09:21:20.057798  262357 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1101 09:21:20.085136  262357 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1101 09:21:20.109896  262357 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1101 09:21:20.247952  262357 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1101 09:21:20.391536  262357 docker.go:234] disabling docker service ...
	I1101 09:21:20.391616  262357 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1101 09:21:20.420963  262357 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1101 09:21:20.440313  262357 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1101 09:21:20.570652  262357 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1101 09:21:20.691676  262357 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1101 09:21:20.709716  262357 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1101 09:21:20.730739  262357 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1101 09:21:20.730822  262357 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:21:20.842859  262357 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1101 09:21:20.842984  262357 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:21:20.904660  262357 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:21:20.960656  262357 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:21:21.017527  262357 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1101 09:21:21.027075  262357 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:21:21.038255  262357 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:21:21.060709  262357 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:21:21.087186  262357 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1101 09:21:21.096178  262357 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1101 09:21:21.104672  262357 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 09:21:16.840701  263568 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1101 09:21:16.841054  263568 start.go:159] libmachine.API.Create for "newest-cni-340756" (driver="docker")
	I1101 09:21:16.841107  263568 client.go:173] LocalClient.Create starting
	I1101 09:21:16.841182  263568 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21835-5913/.minikube/certs/ca.pem
	I1101 09:21:16.841215  263568 main.go:143] libmachine: Decoding PEM data...
	I1101 09:21:16.841233  263568 main.go:143] libmachine: Parsing certificate...
	I1101 09:21:16.841282  263568 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21835-5913/.minikube/certs/cert.pem
	I1101 09:21:16.841300  263568 main.go:143] libmachine: Decoding PEM data...
	I1101 09:21:16.841312  263568 main.go:143] libmachine: Parsing certificate...
	I1101 09:21:16.841698  263568 cli_runner.go:164] Run: docker network inspect newest-cni-340756 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1101 09:21:16.867231  263568 cli_runner.go:211] docker network inspect newest-cni-340756 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1101 09:21:16.867316  263568 network_create.go:284] running [docker network inspect newest-cni-340756] to gather additional debugging logs...
	I1101 09:21:16.867341  263568 cli_runner.go:164] Run: docker network inspect newest-cni-340756
	W1101 09:21:16.892738  263568 cli_runner.go:211] docker network inspect newest-cni-340756 returned with exit code 1
	I1101 09:21:16.892791  263568 network_create.go:287] error running [docker network inspect newest-cni-340756]: docker network inspect newest-cni-340756: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network newest-cni-340756 not found
	I1101 09:21:16.892808  263568 network_create.go:289] output of [docker network inspect newest-cni-340756]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network newest-cni-340756 not found
	
	** /stderr **
	I1101 09:21:16.892947  263568 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1101 09:21:16.918760  263568 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-5f44df6b5a5b IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:aa:38:92:20:b3:ae} reservation:<nil>}
	I1101 09:21:16.919791  263568 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-ec772021a1d5 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:e6:14:7e:99:b1:e5} reservation:<nil>}
	I1101 09:21:16.920676  263568 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-6ef14c0d2e1a IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:ca:5b:36:d5:85:2b} reservation:<nil>}
	I1101 09:21:16.921315  263568 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-2f536846b22c IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:82:b1:bc:21:0c:bb} reservation:<nil>}
	I1101 09:21:16.921649  263568 network.go:211] skipping subnet 192.168.85.0/24 that is taken: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName:br-c9feba7a919c IfaceIPv4:192.168.85.1 IfaceMTU:1500 IfaceMAC:a6:96:07:ef:ec:1e} reservation:<nil>}
	I1101 09:21:16.922539  263568 network.go:206] using free private subnet 192.168.94.0/24: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001dc8f80}
	I1101 09:21:16.922561  263568 network_create.go:124] attempt to create docker network newest-cni-340756 192.168.94.0/24 with gateway 192.168.94.1 and MTU of 1500 ...
	I1101 09:21:16.922611  263568 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.94.0/24 --gateway=192.168.94.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-340756 newest-cni-340756
	I1101 09:21:17.005643  263568 network_create.go:108] docker network newest-cni-340756 192.168.94.0/24 created
	I1101 09:21:17.005678  263568 kic.go:121] calculated static IP "192.168.94.2" for the "newest-cni-340756" container
	I1101 09:21:17.005776  263568 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1101 09:21:17.029571  263568 cli_runner.go:164] Run: docker volume create newest-cni-340756 --label name.minikube.sigs.k8s.io=newest-cni-340756 --label created_by.minikube.sigs.k8s.io=true
	I1101 09:21:17.051568  263568 oci.go:103] Successfully created a docker volume newest-cni-340756
	I1101 09:21:17.051642  263568 cli_runner.go:164] Run: docker run --rm --name newest-cni-340756-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-340756 --entrypoint /usr/bin/test -v newest-cni-340756:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -d /var/lib
	I1101 09:21:17.571227  263568 oci.go:107] Successfully prepared a docker volume newest-cni-340756
	I1101 09:21:17.571380  263568 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1101 09:21:17.571414  263568 kic.go:194] Starting extracting preloaded images to volume ...
	I1101 09:21:17.571510  263568 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21835-5913/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v newest-cni-340756:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir
	I1101 09:21:21.188552  262357 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1101 09:21:23.490005  262357 ssh_runner.go:235] Completed: sudo systemctl restart crio: (2.301416841s)
	I1101 09:21:23.490041  262357 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1101 09:21:23.490092  262357 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1101 09:21:23.494842  262357 start.go:564] Will wait 60s for crictl version
	I1101 09:21:23.494920  262357 ssh_runner.go:195] Run: which crictl
	I1101 09:21:23.499217  262357 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1101 09:21:23.534142  262357 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1101 09:21:23.534230  262357 ssh_runner.go:195] Run: crio --version
	I1101 09:21:23.572089  262357 ssh_runner.go:195] Run: crio --version
	I1101 09:21:23.609461  262357 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	W1101 09:21:21.584988  256247 pod_ready.go:104] pod "coredns-66bc5c9577-wwvth" is not "Ready", error: <nil>
	W1101 09:21:24.079457  256247 pod_ready.go:104] pod "coredns-66bc5c9577-wwvth" is not "Ready", error: <nil>
	I1101 09:21:19.847224  216020 cri.go:89] found id: ""
	I1101 09:21:19.847252  216020 logs.go:282] 0 containers: []
	W1101 09:21:19.847262  216020 logs.go:284] No container was found matching "kindnet"
	I1101 09:21:19.847268  216020 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1101 09:21:19.847328  216020 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1101 09:21:19.882017  216020 cri.go:89] found id: ""
	I1101 09:21:19.882045  216020 logs.go:282] 0 containers: []
	W1101 09:21:19.882056  216020 logs.go:284] No container was found matching "storage-provisioner"
	I1101 09:21:19.882076  216020 logs.go:123] Gathering logs for container status ...
	I1101 09:21:19.882090  216020 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1101 09:21:19.925584  216020 logs.go:123] Gathering logs for kubelet ...
	I1101 09:21:19.925618  216020 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1101 09:21:20.064928  216020 logs.go:123] Gathering logs for describe nodes ...
	I1101 09:21:20.064965  216020 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1101 09:21:20.156656  216020 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1101 09:21:20.156694  216020 logs.go:123] Gathering logs for kube-apiserver [b88c1a6c22eb7f0b872bf19c02addfdad95c9f578a8590d19c57a767cb19559f] ...
	I1101 09:21:20.156710  216020 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b88c1a6c22eb7f0b872bf19c02addfdad95c9f578a8590d19c57a767cb19559f"
	I1101 09:21:20.207508  216020 logs.go:123] Gathering logs for kube-controller-manager [df274c74effda434b9fae196ccc17a5c3a3d2b7872fbe77a79ff7e34adc6b3cd] ...
	I1101 09:21:20.207557  216020 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 df274c74effda434b9fae196ccc17a5c3a3d2b7872fbe77a79ff7e34adc6b3cd"
	I1101 09:21:20.248951  216020 logs.go:123] Gathering logs for dmesg ...
	I1101 09:21:20.248986  216020 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1101 09:21:20.271836  216020 logs.go:123] Gathering logs for kube-scheduler [7bc51deb4b19b20f402b361941a1b67db11ece9919ebd34259e751ec1f07abb8] ...
	I1101 09:21:20.271902  216020 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7bc51deb4b19b20f402b361941a1b67db11ece9919ebd34259e751ec1f07abb8"
	I1101 09:21:20.353127  216020 logs.go:123] Gathering logs for kube-controller-manager [b266a6bfcdcb5bd3d3e21a7e3af65c4839577648b3116ca9c382cea59b909867] ...
	I1101 09:21:20.353172  216020 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b266a6bfcdcb5bd3d3e21a7e3af65c4839577648b3116ca9c382cea59b909867"
	I1101 09:21:20.387495  216020 logs.go:123] Gathering logs for CRI-O ...
	I1101 09:21:20.387535  216020 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1101 09:21:22.975635  216020 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1101 09:21:22.976133  216020 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1101 09:21:22.976185  216020 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1101 09:21:22.976240  216020 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1101 09:21:23.007894  216020 cri.go:89] found id: "b88c1a6c22eb7f0b872bf19c02addfdad95c9f578a8590d19c57a767cb19559f"
	I1101 09:21:23.007922  216020 cri.go:89] found id: ""
	I1101 09:21:23.007932  216020 logs.go:282] 1 containers: [b88c1a6c22eb7f0b872bf19c02addfdad95c9f578a8590d19c57a767cb19559f]
	I1101 09:21:23.008067  216020 ssh_runner.go:195] Run: which crictl
	I1101 09:21:23.012581  216020 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1101 09:21:23.012653  216020 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1101 09:21:23.041211  216020 cri.go:89] found id: ""
	I1101 09:21:23.041238  216020 logs.go:282] 0 containers: []
	W1101 09:21:23.041247  216020 logs.go:284] No container was found matching "etcd"
	I1101 09:21:23.041253  216020 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1101 09:21:23.041303  216020 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1101 09:21:23.074549  216020 cri.go:89] found id: ""
	I1101 09:21:23.074576  216020 logs.go:282] 0 containers: []
	W1101 09:21:23.074587  216020 logs.go:284] No container was found matching "coredns"
	I1101 09:21:23.074595  216020 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1101 09:21:23.074651  216020 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1101 09:21:23.104257  216020 cri.go:89] found id: "7bc51deb4b19b20f402b361941a1b67db11ece9919ebd34259e751ec1f07abb8"
	I1101 09:21:23.104283  216020 cri.go:89] found id: ""
	I1101 09:21:23.104307  216020 logs.go:282] 1 containers: [7bc51deb4b19b20f402b361941a1b67db11ece9919ebd34259e751ec1f07abb8]
	I1101 09:21:23.104368  216020 ssh_runner.go:195] Run: which crictl
	I1101 09:21:23.108627  216020 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1101 09:21:23.108696  216020 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1101 09:21:23.138311  216020 cri.go:89] found id: ""
	I1101 09:21:23.138335  216020 logs.go:282] 0 containers: []
	W1101 09:21:23.138343  216020 logs.go:284] No container was found matching "kube-proxy"
	I1101 09:21:23.138349  216020 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1101 09:21:23.138403  216020 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1101 09:21:23.166959  216020 cri.go:89] found id: "b266a6bfcdcb5bd3d3e21a7e3af65c4839577648b3116ca9c382cea59b909867"
	I1101 09:21:23.166980  216020 cri.go:89] found id: "df274c74effda434b9fae196ccc17a5c3a3d2b7872fbe77a79ff7e34adc6b3cd"
	I1101 09:21:23.166986  216020 cri.go:89] found id: ""
	I1101 09:21:23.167002  216020 logs.go:282] 2 containers: [b266a6bfcdcb5bd3d3e21a7e3af65c4839577648b3116ca9c382cea59b909867 df274c74effda434b9fae196ccc17a5c3a3d2b7872fbe77a79ff7e34adc6b3cd]
	I1101 09:21:23.167062  216020 ssh_runner.go:195] Run: which crictl
	I1101 09:21:23.171247  216020 ssh_runner.go:195] Run: which crictl
	I1101 09:21:23.175370  216020 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1101 09:21:23.175428  216020 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1101 09:21:23.205749  216020 cri.go:89] found id: ""
	I1101 09:21:23.205776  216020 logs.go:282] 0 containers: []
	W1101 09:21:23.205787  216020 logs.go:284] No container was found matching "kindnet"
	I1101 09:21:23.205795  216020 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1101 09:21:23.205846  216020 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1101 09:21:23.234046  216020 cri.go:89] found id: ""
	I1101 09:21:23.234070  216020 logs.go:282] 0 containers: []
	W1101 09:21:23.234079  216020 logs.go:284] No container was found matching "storage-provisioner"
	I1101 09:21:23.234095  216020 logs.go:123] Gathering logs for kubelet ...
	I1101 09:21:23.234106  216020 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1101 09:21:23.323713  216020 logs.go:123] Gathering logs for kube-apiserver [b88c1a6c22eb7f0b872bf19c02addfdad95c9f578a8590d19c57a767cb19559f] ...
	I1101 09:21:23.323762  216020 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b88c1a6c22eb7f0b872bf19c02addfdad95c9f578a8590d19c57a767cb19559f"
	I1101 09:21:23.358473  216020 logs.go:123] Gathering logs for kube-scheduler [7bc51deb4b19b20f402b361941a1b67db11ece9919ebd34259e751ec1f07abb8] ...
	I1101 09:21:23.358516  216020 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7bc51deb4b19b20f402b361941a1b67db11ece9919ebd34259e751ec1f07abb8"
	I1101 09:21:23.436851  216020 logs.go:123] Gathering logs for kube-controller-manager [b266a6bfcdcb5bd3d3e21a7e3af65c4839577648b3116ca9c382cea59b909867] ...
	I1101 09:21:23.436901  216020 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b266a6bfcdcb5bd3d3e21a7e3af65c4839577648b3116ca9c382cea59b909867"
	I1101 09:21:23.468130  216020 logs.go:123] Gathering logs for dmesg ...
	I1101 09:21:23.468165  216020 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1101 09:21:23.489368  216020 logs.go:123] Gathering logs for describe nodes ...
	I1101 09:21:23.489413  216020 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1101 09:21:23.567586  216020 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1101 09:21:23.567611  216020 logs.go:123] Gathering logs for kube-controller-manager [df274c74effda434b9fae196ccc17a5c3a3d2b7872fbe77a79ff7e34adc6b3cd] ...
	I1101 09:21:23.567627  216020 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 df274c74effda434b9fae196ccc17a5c3a3d2b7872fbe77a79ff7e34adc6b3cd"
	I1101 09:21:23.599213  216020 logs.go:123] Gathering logs for CRI-O ...
	I1101 09:21:23.599241  216020 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1101 09:21:23.679306  216020 logs.go:123] Gathering logs for container status ...
	I1101 09:21:23.679340  216020 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1101 09:21:23.611083  262357 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-648641 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1101 09:21:23.635605  262357 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I1101 09:21:23.640214  262357 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.103.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1101 09:21:23.651879  262357 kubeadm.go:884] updating cluster {Name:default-k8s-diff-port-648641 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-648641 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1101 09:21:23.652020  262357 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1101 09:21:23.652078  262357 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 09:21:23.693412  262357 crio.go:514] all images are preloaded for cri-o runtime.
	I1101 09:21:23.693433  262357 crio.go:433] Images already preloaded, skipping extraction
	I1101 09:21:23.693481  262357 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 09:21:23.722392  262357 crio.go:514] all images are preloaded for cri-o runtime.
	I1101 09:21:23.722418  262357 cache_images.go:86] Images are preloaded, skipping loading
	I1101 09:21:23.722430  262357 kubeadm.go:935] updating node { 192.168.103.2 8444 v1.34.1 crio true true} ...
	I1101 09:21:23.722539  262357 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-648641 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-648641 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1101 09:21:23.722663  262357 ssh_runner.go:195] Run: crio config
	I1101 09:21:23.772026  262357 cni.go:84] Creating CNI manager for ""
	I1101 09:21:23.772055  262357 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1101 09:21:23.772085  262357 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1101 09:21:23.772116  262357 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8444 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-648641 NodeName:default-k8s-diff-port-648641 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1101 09:21:23.772283  262357 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-648641"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.103.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1101 09:21:23.772357  262357 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1101 09:21:23.781719  262357 binaries.go:44] Found k8s binaries, skipping transfer
	I1101 09:21:23.781807  262357 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1101 09:21:23.790849  262357 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (379 bytes)
	I1101 09:21:23.806683  262357 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1101 09:21:23.824637  262357 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2227 bytes)
	I1101 09:21:23.840504  262357 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I1101 09:21:23.845601  262357 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1101 09:21:23.857551  262357 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 09:21:23.965822  262357 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1101 09:21:23.992771  262357 certs.go:69] Setting up /home/jenkins/minikube-integration/21835-5913/.minikube/profiles/default-k8s-diff-port-648641 for IP: 192.168.103.2
	I1101 09:21:23.992799  262357 certs.go:195] generating shared ca certs ...
	I1101 09:21:23.992821  262357 certs.go:227] acquiring lock for ca certs: {Name:mkfdee6a84670347521013ebeef165551380cb9c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:21:23.993014  262357 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21835-5913/.minikube/ca.key
	I1101 09:21:23.993072  262357 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21835-5913/.minikube/proxy-client-ca.key
	I1101 09:21:23.993084  262357 certs.go:257] generating profile certs ...
	I1101 09:21:23.993150  262357 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21835-5913/.minikube/profiles/default-k8s-diff-port-648641/client.key
	I1101 09:21:23.993167  262357 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21835-5913/.minikube/profiles/default-k8s-diff-port-648641/client.crt with IP's: []
	I1101 09:21:24.649794  262357 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21835-5913/.minikube/profiles/default-k8s-diff-port-648641/client.crt ...
	I1101 09:21:24.649827  262357 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21835-5913/.minikube/profiles/default-k8s-diff-port-648641/client.crt: {Name:mk48d159ec661a892ecb6482cee9b66b0b9ea0cf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:21:24.650037  262357 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21835-5913/.minikube/profiles/default-k8s-diff-port-648641/client.key ...
	I1101 09:21:24.650053  262357 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21835-5913/.minikube/profiles/default-k8s-diff-port-648641/client.key: {Name:mk9f79181b551fb98ff4a9e4e23d7afc8657fc15 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:21:24.650183  262357 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21835-5913/.minikube/profiles/default-k8s-diff-port-648641/apiserver.key.7ba7d8ea
	I1101 09:21:24.650200  262357 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21835-5913/.minikube/profiles/default-k8s-diff-port-648641/apiserver.crt.7ba7d8ea with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.103.2]
	I1101 09:21:25.253567  262357 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21835-5913/.minikube/profiles/default-k8s-diff-port-648641/apiserver.crt.7ba7d8ea ...
	I1101 09:21:25.253596  262357 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21835-5913/.minikube/profiles/default-k8s-diff-port-648641/apiserver.crt.7ba7d8ea: {Name:mk4398b189fdd3bab322efcd074e4028b1144897 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:21:25.253792  262357 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21835-5913/.minikube/profiles/default-k8s-diff-port-648641/apiserver.key.7ba7d8ea ...
	I1101 09:21:25.253811  262357 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21835-5913/.minikube/profiles/default-k8s-diff-port-648641/apiserver.key.7ba7d8ea: {Name:mk97b095c18c480b9b17921ac02ed3850338c147 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:21:25.253948  262357 certs.go:382] copying /home/jenkins/minikube-integration/21835-5913/.minikube/profiles/default-k8s-diff-port-648641/apiserver.crt.7ba7d8ea -> /home/jenkins/minikube-integration/21835-5913/.minikube/profiles/default-k8s-diff-port-648641/apiserver.crt
	I1101 09:21:25.254066  262357 certs.go:386] copying /home/jenkins/minikube-integration/21835-5913/.minikube/profiles/default-k8s-diff-port-648641/apiserver.key.7ba7d8ea -> /home/jenkins/minikube-integration/21835-5913/.minikube/profiles/default-k8s-diff-port-648641/apiserver.key
	I1101 09:21:25.254162  262357 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21835-5913/.minikube/profiles/default-k8s-diff-port-648641/proxy-client.key
	I1101 09:21:25.254181  262357 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21835-5913/.minikube/profiles/default-k8s-diff-port-648641/proxy-client.crt with IP's: []
	I1101 09:21:25.601921  262357 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21835-5913/.minikube/profiles/default-k8s-diff-port-648641/proxy-client.crt ...
	I1101 09:21:25.601948  262357 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21835-5913/.minikube/profiles/default-k8s-diff-port-648641/proxy-client.crt: {Name:mk5eefec9a7f08e903ffd816191f14fd7bac2543 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:21:25.602107  262357 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21835-5913/.minikube/profiles/default-k8s-diff-port-648641/proxy-client.key ...
	I1101 09:21:25.602121  262357 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21835-5913/.minikube/profiles/default-k8s-diff-port-648641/proxy-client.key: {Name:mk91946f6a52fc49fa1ca52a724a3d3ae7a3f56f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:21:25.602287  262357 certs.go:484] found cert: /home/jenkins/minikube-integration/21835-5913/.minikube/certs/9414.pem (1338 bytes)
	W1101 09:21:25.602319  262357 certs.go:480] ignoring /home/jenkins/minikube-integration/21835-5913/.minikube/certs/9414_empty.pem, impossibly tiny 0 bytes
	I1101 09:21:25.602329  262357 certs.go:484] found cert: /home/jenkins/minikube-integration/21835-5913/.minikube/certs/ca-key.pem (1675 bytes)
	I1101 09:21:25.602352  262357 certs.go:484] found cert: /home/jenkins/minikube-integration/21835-5913/.minikube/certs/ca.pem (1078 bytes)
	I1101 09:21:25.602380  262357 certs.go:484] found cert: /home/jenkins/minikube-integration/21835-5913/.minikube/certs/cert.pem (1123 bytes)
	I1101 09:21:25.602401  262357 certs.go:484] found cert: /home/jenkins/minikube-integration/21835-5913/.minikube/certs/key.pem (1675 bytes)
	I1101 09:21:25.602447  262357 certs.go:484] found cert: /home/jenkins/minikube-integration/21835-5913/.minikube/files/etc/ssl/certs/94142.pem (1708 bytes)
	I1101 09:21:25.603018  262357 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-5913/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1101 09:21:25.622661  262357 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-5913/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1101 09:21:25.641942  262357 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-5913/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1101 09:21:25.661192  262357 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-5913/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1101 09:21:25.682676  262357 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-5913/.minikube/profiles/default-k8s-diff-port-648641/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1101 09:21:25.703490  262357 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-5913/.minikube/profiles/default-k8s-diff-port-648641/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1101 09:21:25.723930  262357 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-5913/.minikube/profiles/default-k8s-diff-port-648641/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1101 09:21:25.743020  262357 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-5913/.minikube/profiles/default-k8s-diff-port-648641/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1101 09:21:25.763306  262357 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-5913/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1101 09:21:25.786852  262357 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-5913/.minikube/certs/9414.pem --> /usr/share/ca-certificates/9414.pem (1338 bytes)
	I1101 09:21:25.806545  262357 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-5913/.minikube/files/etc/ssl/certs/94142.pem --> /usr/share/ca-certificates/94142.pem (1708 bytes)
	I1101 09:21:25.825310  262357 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1101 09:21:25.838925  262357 ssh_runner.go:195] Run: openssl version
	I1101 09:21:25.845240  262357 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1101 09:21:25.854365  262357 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1101 09:21:25.858427  262357 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  1 08:29 /usr/share/ca-certificates/minikubeCA.pem
	I1101 09:21:25.858489  262357 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1101 09:21:25.894021  262357 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1101 09:21:25.903233  262357 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9414.pem && ln -fs /usr/share/ca-certificates/9414.pem /etc/ssl/certs/9414.pem"
	I1101 09:21:25.912055  262357 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9414.pem
	I1101 09:21:25.916046  262357 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  1 08:35 /usr/share/ca-certificates/9414.pem
	I1101 09:21:25.916107  262357 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9414.pem
	I1101 09:21:25.952115  262357 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/9414.pem /etc/ssl/certs/51391683.0"
	I1101 09:21:25.961907  262357 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/94142.pem && ln -fs /usr/share/ca-certificates/94142.pem /etc/ssl/certs/94142.pem"
	I1101 09:21:25.971569  262357 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/94142.pem
	I1101 09:21:25.976077  262357 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  1 08:35 /usr/share/ca-certificates/94142.pem
	I1101 09:21:25.976143  262357 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/94142.pem
	I1101 09:21:26.014698  262357 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/94142.pem /etc/ssl/certs/3ec20f2e.0"
	I1101 09:21:26.024531  262357 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1101 09:21:26.028718  262357 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1101 09:21:26.028790  262357 kubeadm.go:401] StartCluster: {Name:default-k8s-diff-port-648641 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-648641 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 09:21:26.028908  262357 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1101 09:21:26.028983  262357 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1101 09:21:26.058470  262357 cri.go:89] found id: ""
	I1101 09:21:26.058543  262357 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1101 09:21:26.067388  262357 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1101 09:21:26.075783  262357 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1101 09:21:26.075839  262357 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1101 09:21:26.084220  262357 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1101 09:21:26.084240  262357 kubeadm.go:158] found existing configuration files:
	
	I1101 09:21:26.084284  262357 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1101 09:21:26.092406  262357 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1101 09:21:26.092466  262357 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1101 09:21:26.100628  262357 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1101 09:21:26.109119  262357 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1101 09:21:26.109195  262357 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1101 09:21:26.120789  262357 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1101 09:21:26.129968  262357 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1101 09:21:26.130023  262357 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1101 09:21:26.137851  262357 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1101 09:21:26.145759  262357 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1101 09:21:26.145812  262357 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1101 09:21:26.153433  262357 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1101 09:21:23.377195  263568 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21835-5913/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v newest-cni-340756:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir: (5.805630806s)
	I1101 09:21:23.377232  263568 kic.go:203] duration metric: took 5.805816487s to extract preloaded images to volume ...
	W1101 09:21:23.377352  263568 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1101 09:21:23.377395  263568 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1101 09:21:23.377444  263568 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1101 09:21:23.460854  263568 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname newest-cni-340756 --name newest-cni-340756 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-340756 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=newest-cni-340756 --network newest-cni-340756 --ip 192.168.94.2 --volume newest-cni-340756:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8
	I1101 09:21:23.784029  263568 cli_runner.go:164] Run: docker container inspect newest-cni-340756 --format={{.State.Running}}
	I1101 09:21:23.805818  263568 cli_runner.go:164] Run: docker container inspect newest-cni-340756 --format={{.State.Status}}
	I1101 09:21:23.826649  263568 cli_runner.go:164] Run: docker exec newest-cni-340756 stat /var/lib/dpkg/alternatives/iptables
	I1101 09:21:23.877209  263568 oci.go:144] the created container "newest-cni-340756" has a running status.
	I1101 09:21:23.877242  263568 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21835-5913/.minikube/machines/newest-cni-340756/id_rsa...
	I1101 09:21:24.024586  263568 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21835-5913/.minikube/machines/newest-cni-340756/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1101 09:21:24.059584  263568 cli_runner.go:164] Run: docker container inspect newest-cni-340756 --format={{.State.Status}}
	I1101 09:21:24.086789  263568 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1101 09:21:24.086828  263568 kic_runner.go:114] Args: [docker exec --privileged newest-cni-340756 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1101 09:21:24.140069  263568 cli_runner.go:164] Run: docker container inspect newest-cni-340756 --format={{.State.Status}}
	I1101 09:21:24.166009  263568 machine.go:94] provisionDockerMachine start ...
	I1101 09:21:24.166103  263568 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-340756
	I1101 09:21:24.187117  263568 main.go:143] libmachine: Using SSH client type: native
	I1101 09:21:24.187461  263568 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33088 <nil> <nil>}
	I1101 09:21:24.187485  263568 main.go:143] libmachine: About to run SSH command:
	hostname
	I1101 09:21:24.340065  263568 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-340756
	
	I1101 09:21:24.340091  263568 ubuntu.go:182] provisioning hostname "newest-cni-340756"
	I1101 09:21:24.340152  263568 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-340756
	I1101 09:21:24.360729  263568 main.go:143] libmachine: Using SSH client type: native
	I1101 09:21:24.361089  263568 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33088 <nil> <nil>}
	I1101 09:21:24.361115  263568 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-340756 && echo "newest-cni-340756" | sudo tee /etc/hostname
	I1101 09:21:24.516930  263568 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-340756
	
	I1101 09:21:24.517026  263568 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-340756
	I1101 09:21:24.538109  263568 main.go:143] libmachine: Using SSH client type: native
	I1101 09:21:24.538337  263568 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33088 <nil> <nil>}
	I1101 09:21:24.538357  263568 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-340756' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-340756/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-340756' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1101 09:21:24.683297  263568 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1101 09:21:24.683332  263568 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21835-5913/.minikube CaCertPath:/home/jenkins/minikube-integration/21835-5913/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21835-5913/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21835-5913/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21835-5913/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21835-5913/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21835-5913/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21835-5913/.minikube}
	I1101 09:21:24.683401  263568 ubuntu.go:190] setting up certificates
	I1101 09:21:24.683417  263568 provision.go:84] configureAuth start
	I1101 09:21:24.683487  263568 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-340756
	I1101 09:21:24.703460  263568 provision.go:143] copyHostCerts
	I1101 09:21:24.703533  263568 exec_runner.go:144] found /home/jenkins/minikube-integration/21835-5913/.minikube/ca.pem, removing ...
	I1101 09:21:24.703549  263568 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21835-5913/.minikube/ca.pem
	I1101 09:21:24.703639  263568 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21835-5913/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21835-5913/.minikube/ca.pem (1078 bytes)
	I1101 09:21:24.703787  263568 exec_runner.go:144] found /home/jenkins/minikube-integration/21835-5913/.minikube/cert.pem, removing ...
	I1101 09:21:24.703801  263568 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21835-5913/.minikube/cert.pem
	I1101 09:21:24.703847  263568 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21835-5913/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21835-5913/.minikube/cert.pem (1123 bytes)
	I1101 09:21:24.703974  263568 exec_runner.go:144] found /home/jenkins/minikube-integration/21835-5913/.minikube/key.pem, removing ...
	I1101 09:21:24.703987  263568 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21835-5913/.minikube/key.pem
	I1101 09:21:24.704026  263568 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21835-5913/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21835-5913/.minikube/key.pem (1675 bytes)
	I1101 09:21:24.704120  263568 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21835-5913/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21835-5913/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21835-5913/.minikube/certs/ca-key.pem org=jenkins.newest-cni-340756 san=[127.0.0.1 192.168.94.2 localhost minikube newest-cni-340756]
	I1101 09:21:24.968401  263568 provision.go:177] copyRemoteCerts
	I1101 09:21:24.968462  263568 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1101 09:21:24.968516  263568 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-340756
	I1101 09:21:24.987661  263568 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/21835-5913/.minikube/machines/newest-cni-340756/id_rsa Username:docker}
	I1101 09:21:25.089711  263568 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-5913/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1101 09:21:25.110667  263568 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-5913/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1101 09:21:25.129152  263568 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-5913/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1101 09:21:25.147498  263568 provision.go:87] duration metric: took 464.066493ms to configureAuth
	I1101 09:21:25.147532  263568 ubuntu.go:206] setting minikube options for container-runtime
	I1101 09:21:25.147731  263568 config.go:182] Loaded profile config "newest-cni-340756": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:21:25.147837  263568 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-340756
	I1101 09:21:25.169430  263568 main.go:143] libmachine: Using SSH client type: native
	I1101 09:21:25.169701  263568 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33088 <nil> <nil>}
	I1101 09:21:25.169738  263568 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1101 09:21:25.435553  263568 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1101 09:21:25.435579  263568 machine.go:97] duration metric: took 1.269545706s to provisionDockerMachine
	I1101 09:21:25.435591  263568 client.go:176] duration metric: took 8.594473608s to LocalClient.Create
	I1101 09:21:25.435611  263568 start.go:167] duration metric: took 8.594560498s to libmachine.API.Create "newest-cni-340756"
	I1101 09:21:25.435620  263568 start.go:293] postStartSetup for "newest-cni-340756" (driver="docker")
	I1101 09:21:25.435633  263568 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1101 09:21:25.435699  263568 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1101 09:21:25.435752  263568 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-340756
	I1101 09:21:25.455448  263568 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/21835-5913/.minikube/machines/newest-cni-340756/id_rsa Username:docker}
	I1101 09:21:25.559491  263568 ssh_runner.go:195] Run: cat /etc/os-release
	I1101 09:21:25.563355  263568 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1101 09:21:25.563393  263568 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1101 09:21:25.563406  263568 filesync.go:126] Scanning /home/jenkins/minikube-integration/21835-5913/.minikube/addons for local assets ...
	I1101 09:21:25.563474  263568 filesync.go:126] Scanning /home/jenkins/minikube-integration/21835-5913/.minikube/files for local assets ...
	I1101 09:21:25.563574  263568 filesync.go:149] local asset: /home/jenkins/minikube-integration/21835-5913/.minikube/files/etc/ssl/certs/94142.pem -> 94142.pem in /etc/ssl/certs
	I1101 09:21:25.563698  263568 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1101 09:21:25.573078  263568 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-5913/.minikube/files/etc/ssl/certs/94142.pem --> /etc/ssl/certs/94142.pem (1708 bytes)
	I1101 09:21:25.594255  263568 start.go:296] duration metric: took 158.619429ms for postStartSetup
	I1101 09:21:25.594641  263568 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-340756
	I1101 09:21:25.613714  263568 profile.go:143] Saving config to /home/jenkins/minikube-integration/21835-5913/.minikube/profiles/newest-cni-340756/config.json ...
	I1101 09:21:25.614000  263568 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1101 09:21:25.614044  263568 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-340756
	I1101 09:21:25.633601  263568 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/21835-5913/.minikube/machines/newest-cni-340756/id_rsa Username:docker}
	I1101 09:21:25.732786  263568 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1101 09:21:25.737820  263568 start.go:128] duration metric: took 8.899363887s to createHost
	I1101 09:21:25.737848  263568 start.go:83] releasing machines lock for "newest-cni-340756", held for 8.899505816s
	I1101 09:21:25.737960  263568 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-340756
	I1101 09:21:25.757417  263568 ssh_runner.go:195] Run: cat /version.json
	I1101 09:21:25.757447  263568 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1101 09:21:25.757471  263568 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-340756
	I1101 09:21:25.757513  263568 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-340756
	I1101 09:21:25.778247  263568 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/21835-5913/.minikube/machines/newest-cni-340756/id_rsa Username:docker}
	I1101 09:21:25.779020  263568 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/21835-5913/.minikube/machines/newest-cni-340756/id_rsa Username:docker}
	I1101 09:21:25.932148  263568 ssh_runner.go:195] Run: systemctl --version
	I1101 09:21:25.938684  263568 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1101 09:21:25.975047  263568 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1101 09:21:25.979859  263568 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1101 09:21:25.979953  263568 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1101 09:21:26.007501  263568 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1101 09:21:26.007525  263568 start.go:496] detecting cgroup driver to use...
	I1101 09:21:26.007559  263568 detect.go:190] detected "systemd" cgroup driver on host os
	I1101 09:21:26.007611  263568 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1101 09:21:26.025281  263568 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1101 09:21:26.038138  263568 docker.go:218] disabling cri-docker service (if available) ...
	I1101 09:21:26.038192  263568 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1101 09:21:26.057086  263568 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1101 09:21:26.076252  263568 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1101 09:21:26.171183  263568 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1101 09:21:26.283510  263568 docker.go:234] disabling docker service ...
	I1101 09:21:26.283575  263568 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1101 09:21:26.309027  263568 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1101 09:21:26.324709  263568 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1101 09:21:26.426167  263568 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1101 09:21:26.527590  263568 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1101 09:21:26.541712  263568 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1101 09:21:26.557160  263568 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1101 09:21:26.557225  263568 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:21:26.569927  263568 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1101 09:21:26.569990  263568 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:21:26.579394  263568 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:21:26.588657  263568 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:21:26.598019  263568 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1101 09:21:26.607343  263568 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:21:26.616850  263568 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:21:26.634587  263568 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:21:26.644446  263568 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1101 09:21:26.652632  263568 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1101 09:21:26.660828  263568 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 09:21:26.753353  263568 ssh_runner.go:195] Run: sudo systemctl restart crio
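The sed edits above all rewrite /etc/crio/crio.conf.d/02-crio.conf before CRI-O is restarted. Reconstructed only from those commands (section headers and the rest of the drop-in are not shown in the log and are omitted here), the fields they leave behind look roughly like:

    pause_image = "registry.k8s.io/pause:3.10.1"
    cgroup_manager = "systemd"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]

The ip_unprivileged_port_start=0 default sysctl lets pod processes bind ports below 1024 without extra privileges, and conmon_cgroup = "pod" keeps the conmon monitor accounted to each pod's cgroup under the systemd cgroup manager.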
	I1101 09:21:26.868227  263568 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1101 09:21:26.868306  263568 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1101 09:21:26.872471  263568 start.go:564] Will wait 60s for crictl version
	I1101 09:21:26.872539  263568 ssh_runner.go:195] Run: which crictl
	I1101 09:21:26.876284  263568 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1101 09:21:26.904294  263568 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1101 09:21:26.904386  263568 ssh_runner.go:195] Run: crio --version
	I1101 09:21:26.934931  263568 ssh_runner.go:195] Run: crio --version
	I1101 09:21:26.967976  263568 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1101 09:21:26.969225  263568 cli_runner.go:164] Run: docker network inspect newest-cni-340756 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1101 09:21:26.987594  263568 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I1101 09:21:26.991814  263568 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1101 09:21:27.004546  263568 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
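The kubeadm.pod-network-cidr option surfaced here corresponds to the ExtraOptions entry in the profile config logged below ({Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}). A plausible equivalent CLI invocation for such a profile, as a sketch only (the exact flags the test harness passed are not shown in this excerpt):

    minikube start -p newest-cni-340756 --driver=docker --container-runtime=crio \
      --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16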
	W1101 09:21:26.569457  256247 pod_ready.go:104] pod "coredns-66bc5c9577-wwvth" is not "Ready", error: <nil>
	W1101 09:21:29.068406  256247 pod_ready.go:104] pod "coredns-66bc5c9577-wwvth" is not "Ready", error: <nil>
	I1101 09:21:26.218002  216020 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1101 09:21:26.218463  216020 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1101 09:21:26.218516  216020 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1101 09:21:26.218571  216020 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1101 09:21:26.248555  216020 cri.go:89] found id: "b88c1a6c22eb7f0b872bf19c02addfdad95c9f578a8590d19c57a767cb19559f"
	I1101 09:21:26.248581  216020 cri.go:89] found id: ""
	I1101 09:21:26.248590  216020 logs.go:282] 1 containers: [b88c1a6c22eb7f0b872bf19c02addfdad95c9f578a8590d19c57a767cb19559f]
	I1101 09:21:26.248653  216020 ssh_runner.go:195] Run: which crictl
	I1101 09:21:26.252797  216020 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1101 09:21:26.252878  216020 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1101 09:21:26.283445  216020 cri.go:89] found id: ""
	I1101 09:21:26.283469  216020 logs.go:282] 0 containers: []
	W1101 09:21:26.283479  216020 logs.go:284] No container was found matching "etcd"
	I1101 09:21:26.283486  216020 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1101 09:21:26.283545  216020 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1101 09:21:26.316521  216020 cri.go:89] found id: ""
	I1101 09:21:26.316551  216020 logs.go:282] 0 containers: []
	W1101 09:21:26.316562  216020 logs.go:284] No container was found matching "coredns"
	I1101 09:21:26.316570  216020 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1101 09:21:26.316633  216020 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1101 09:21:26.346452  216020 cri.go:89] found id: "7bc51deb4b19b20f402b361941a1b67db11ece9919ebd34259e751ec1f07abb8"
	I1101 09:21:26.346479  216020 cri.go:89] found id: ""
	I1101 09:21:26.346486  216020 logs.go:282] 1 containers: [7bc51deb4b19b20f402b361941a1b67db11ece9919ebd34259e751ec1f07abb8]
	I1101 09:21:26.346535  216020 ssh_runner.go:195] Run: which crictl
	I1101 09:21:26.350481  216020 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1101 09:21:26.350546  216020 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1101 09:21:26.385585  216020 cri.go:89] found id: ""
	I1101 09:21:26.385616  216020 logs.go:282] 0 containers: []
	W1101 09:21:26.385626  216020 logs.go:284] No container was found matching "kube-proxy"
	I1101 09:21:26.385635  216020 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1101 09:21:26.385690  216020 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1101 09:21:26.416425  216020 cri.go:89] found id: "b266a6bfcdcb5bd3d3e21a7e3af65c4839577648b3116ca9c382cea59b909867"
	I1101 09:21:26.416451  216020 cri.go:89] found id: "df274c74effda434b9fae196ccc17a5c3a3d2b7872fbe77a79ff7e34adc6b3cd"
	I1101 09:21:26.416455  216020 cri.go:89] found id: ""
	I1101 09:21:26.416463  216020 logs.go:282] 2 containers: [b266a6bfcdcb5bd3d3e21a7e3af65c4839577648b3116ca9c382cea59b909867 df274c74effda434b9fae196ccc17a5c3a3d2b7872fbe77a79ff7e34adc6b3cd]
	I1101 09:21:26.416519  216020 ssh_runner.go:195] Run: which crictl
	I1101 09:21:26.421223  216020 ssh_runner.go:195] Run: which crictl
	I1101 09:21:26.425693  216020 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1101 09:21:26.425771  216020 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1101 09:21:26.459495  216020 cri.go:89] found id: ""
	I1101 09:21:26.459525  216020 logs.go:282] 0 containers: []
	W1101 09:21:26.459535  216020 logs.go:284] No container was found matching "kindnet"
	I1101 09:21:26.459543  216020 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1101 09:21:26.459606  216020 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1101 09:21:26.498235  216020 cri.go:89] found id: ""
	I1101 09:21:26.498264  216020 logs.go:282] 0 containers: []
	W1101 09:21:26.498275  216020 logs.go:284] No container was found matching "storage-provisioner"
	I1101 09:21:26.498292  216020 logs.go:123] Gathering logs for dmesg ...
	I1101 09:21:26.498307  216020 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1101 09:21:26.516258  216020 logs.go:123] Gathering logs for kube-controller-manager [df274c74effda434b9fae196ccc17a5c3a3d2b7872fbe77a79ff7e34adc6b3cd] ...
	I1101 09:21:26.516305  216020 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 df274c74effda434b9fae196ccc17a5c3a3d2b7872fbe77a79ff7e34adc6b3cd"
	I1101 09:21:26.547393  216020 logs.go:123] Gathering logs for CRI-O ...
	I1101 09:21:26.547419  216020 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1101 09:21:26.605955  216020 logs.go:123] Gathering logs for kubelet ...
	I1101 09:21:26.605993  216020 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1101 09:21:26.712155  216020 logs.go:123] Gathering logs for describe nodes ...
	I1101 09:21:26.712196  216020 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1101 09:21:26.777771  216020 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1101 09:21:26.777797  216020 logs.go:123] Gathering logs for kube-apiserver [b88c1a6c22eb7f0b872bf19c02addfdad95c9f578a8590d19c57a767cb19559f] ...
	I1101 09:21:26.777815  216020 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b88c1a6c22eb7f0b872bf19c02addfdad95c9f578a8590d19c57a767cb19559f"
	I1101 09:21:26.817033  216020 logs.go:123] Gathering logs for kube-scheduler [7bc51deb4b19b20f402b361941a1b67db11ece9919ebd34259e751ec1f07abb8] ...
	I1101 09:21:26.817067  216020 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7bc51deb4b19b20f402b361941a1b67db11ece9919ebd34259e751ec1f07abb8"
	I1101 09:21:26.878664  216020 logs.go:123] Gathering logs for kube-controller-manager [b266a6bfcdcb5bd3d3e21a7e3af65c4839577648b3116ca9c382cea59b909867] ...
	I1101 09:21:26.878701  216020 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b266a6bfcdcb5bd3d3e21a7e3af65c4839577648b3116ca9c382cea59b909867"
	I1101 09:21:26.909010  216020 logs.go:123] Gathering logs for container status ...
	I1101 09:21:26.909045  216020 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
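The block above is one iteration of the retry loop in process 216020: the apiserver healthz probe at https://192.168.85.2:8443/healthz is refused, so minikube enumerates CRI containers with crictl and gathers kubelet, CRI-O, dmesg and per-container logs before probing again. The probe is equivalent to (sketch):

    curl -sk https://192.168.85.2:8443/healthz

The connection-refused result, together with the missing etcd container in the crictl listings, is also why the "describe nodes" call against localhost:8443 fails inside each iteration of this loop.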
	I1101 09:21:29.443287  216020 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1101 09:21:29.443773  216020 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1101 09:21:29.443836  216020 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1101 09:21:29.443912  216020 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1101 09:21:29.474362  216020 cri.go:89] found id: "b88c1a6c22eb7f0b872bf19c02addfdad95c9f578a8590d19c57a767cb19559f"
	I1101 09:21:29.474391  216020 cri.go:89] found id: ""
	I1101 09:21:29.474403  216020 logs.go:282] 1 containers: [b88c1a6c22eb7f0b872bf19c02addfdad95c9f578a8590d19c57a767cb19559f]
	I1101 09:21:29.474465  216020 ssh_runner.go:195] Run: which crictl
	I1101 09:21:29.478946  216020 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1101 09:21:29.479020  216020 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1101 09:21:29.508009  216020 cri.go:89] found id: ""
	I1101 09:21:29.508034  216020 logs.go:282] 0 containers: []
	W1101 09:21:29.508046  216020 logs.go:284] No container was found matching "etcd"
	I1101 09:21:29.508054  216020 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1101 09:21:29.508106  216020 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1101 09:21:29.538327  216020 cri.go:89] found id: ""
	I1101 09:21:29.538352  216020 logs.go:282] 0 containers: []
	W1101 09:21:29.538362  216020 logs.go:284] No container was found matching "coredns"
	I1101 09:21:29.538369  216020 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1101 09:21:29.538425  216020 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1101 09:21:29.567717  216020 cri.go:89] found id: "7bc51deb4b19b20f402b361941a1b67db11ece9919ebd34259e751ec1f07abb8"
	I1101 09:21:29.567742  216020 cri.go:89] found id: ""
	I1101 09:21:29.567750  216020 logs.go:282] 1 containers: [7bc51deb4b19b20f402b361941a1b67db11ece9919ebd34259e751ec1f07abb8]
	I1101 09:21:29.567817  216020 ssh_runner.go:195] Run: which crictl
	I1101 09:21:29.572414  216020 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1101 09:21:29.572483  216020 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1101 09:21:29.605199  216020 cri.go:89] found id: ""
	I1101 09:21:29.605227  216020 logs.go:282] 0 containers: []
	W1101 09:21:29.605238  216020 logs.go:284] No container was found matching "kube-proxy"
	I1101 09:21:29.605244  216020 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1101 09:21:29.605313  216020 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1101 09:21:29.637360  216020 cri.go:89] found id: "b266a6bfcdcb5bd3d3e21a7e3af65c4839577648b3116ca9c382cea59b909867"
	I1101 09:21:29.637387  216020 cri.go:89] found id: ""
	I1101 09:21:29.637397  216020 logs.go:282] 1 containers: [b266a6bfcdcb5bd3d3e21a7e3af65c4839577648b3116ca9c382cea59b909867]
	I1101 09:21:29.637456  216020 ssh_runner.go:195] Run: which crictl
	I1101 09:21:29.642045  216020 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1101 09:21:29.642113  216020 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1101 09:21:29.673568  216020 cri.go:89] found id: ""
	I1101 09:21:29.673593  216020 logs.go:282] 0 containers: []
	W1101 09:21:29.673604  216020 logs.go:284] No container was found matching "kindnet"
	I1101 09:21:29.673612  216020 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1101 09:21:29.673670  216020 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1101 09:21:29.706496  216020 cri.go:89] found id: ""
	I1101 09:21:29.706522  216020 logs.go:282] 0 containers: []
	W1101 09:21:29.706529  216020 logs.go:284] No container was found matching "storage-provisioner"
	I1101 09:21:29.706538  216020 logs.go:123] Gathering logs for kube-apiserver [b88c1a6c22eb7f0b872bf19c02addfdad95c9f578a8590d19c57a767cb19559f] ...
	I1101 09:21:29.706555  216020 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b88c1a6c22eb7f0b872bf19c02addfdad95c9f578a8590d19c57a767cb19559f"
	I1101 09:21:29.740598  216020 logs.go:123] Gathering logs for kube-scheduler [7bc51deb4b19b20f402b361941a1b67db11ece9919ebd34259e751ec1f07abb8] ...
	I1101 09:21:29.740627  216020 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7bc51deb4b19b20f402b361941a1b67db11ece9919ebd34259e751ec1f07abb8"
	I1101 09:21:29.809231  216020 logs.go:123] Gathering logs for kube-controller-manager [b266a6bfcdcb5bd3d3e21a7e3af65c4839577648b3116ca9c382cea59b909867] ...
	I1101 09:21:29.809269  216020 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b266a6bfcdcb5bd3d3e21a7e3af65c4839577648b3116ca9c382cea59b909867"
	I1101 09:21:26.220643  262357 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1043-gcp\n", err: exit status 1
	I1101 09:21:26.299212  262357 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1101 09:21:27.005755  263568 kubeadm.go:884] updating cluster {Name:newest-cni-340756 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-340756 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1101 09:21:27.005934  263568 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1101 09:21:27.006015  263568 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 09:21:27.038856  263568 crio.go:514] all images are preloaded for cri-o runtime.
	I1101 09:21:27.038908  263568 crio.go:433] Images already preloaded, skipping extraction
	I1101 09:21:27.038962  263568 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 09:21:27.067077  263568 crio.go:514] all images are preloaded for cri-o runtime.
	I1101 09:21:27.067097  263568 cache_images.go:86] Images are preloaded, skipping loading
	I1101 09:21:27.067104  263568 kubeadm.go:935] updating node { 192.168.94.2 8443 v1.34.1 crio true true} ...
	I1101 09:21:27.067207  263568 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-340756 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:newest-cni-340756 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1101 09:21:27.067302  263568 ssh_runner.go:195] Run: crio config
	I1101 09:21:27.115846  263568 cni.go:84] Creating CNI manager for ""
	I1101 09:21:27.115889  263568 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1101 09:21:27.115917  263568 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1101 09:21:27.115941  263568 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-340756 NodeName:newest-cni-340756 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1101 09:21:27.116109  263568 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-340756"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.94.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1101 09:21:27.116184  263568 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1101 09:21:27.124904  263568 binaries.go:44] Found k8s binaries, skipping transfer
	I1101 09:21:27.124986  263568 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1101 09:21:27.133615  263568 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1101 09:21:27.148747  263568 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1101 09:21:27.168585  263568 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2211 bytes)
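The 2211-byte payload copied here is the kubeadm config rendered above (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration); it is later promoted from kubeadm.yaml.new to /var/tmp/minikube/kubeadm.yaml and consumed by kubeadm init. A trimmed sketch of that consumption (the full --ignore-preflight-errors list appears in the actual invocation logged further down):

    sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml \
      --ignore-preflight-errors=SystemVerification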
	I1101 09:21:27.183947  263568 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I1101 09:21:27.188114  263568 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1101 09:21:27.200232  263568 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 09:21:27.286309  263568 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1101 09:21:27.314555  263568 certs.go:69] Setting up /home/jenkins/minikube-integration/21835-5913/.minikube/profiles/newest-cni-340756 for IP: 192.168.94.2
	I1101 09:21:27.314579  263568 certs.go:195] generating shared ca certs ...
	I1101 09:21:27.314600  263568 certs.go:227] acquiring lock for ca certs: {Name:mkfdee6a84670347521013ebeef165551380cb9c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:21:27.314763  263568 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21835-5913/.minikube/ca.key
	I1101 09:21:27.314804  263568 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21835-5913/.minikube/proxy-client-ca.key
	I1101 09:21:27.314813  263568 certs.go:257] generating profile certs ...
	I1101 09:21:27.314880  263568 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21835-5913/.minikube/profiles/newest-cni-340756/client.key
	I1101 09:21:27.314901  263568 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21835-5913/.minikube/profiles/newest-cni-340756/client.crt with IP's: []
	I1101 09:21:27.692177  263568 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21835-5913/.minikube/profiles/newest-cni-340756/client.crt ...
	I1101 09:21:27.692209  263568 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21835-5913/.minikube/profiles/newest-cni-340756/client.crt: {Name:mkbb4ae05d45ea00cbc1fad0c09f2509b5385f8d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:21:27.692409  263568 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21835-5913/.minikube/profiles/newest-cni-340756/client.key ...
	I1101 09:21:27.692445  263568 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21835-5913/.minikube/profiles/newest-cni-340756/client.key: {Name:mk586a11d387791617c3fc6e5017b434d57db019 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:21:27.692549  263568 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21835-5913/.minikube/profiles/newest-cni-340756/apiserver.key.b81bb48a
	I1101 09:21:27.692564  263568 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21835-5913/.minikube/profiles/newest-cni-340756/apiserver.crt.b81bb48a with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.94.2]
	I1101 09:21:27.755373  263568 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21835-5913/.minikube/profiles/newest-cni-340756/apiserver.crt.b81bb48a ...
	I1101 09:21:27.755407  263568 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21835-5913/.minikube/profiles/newest-cni-340756/apiserver.crt.b81bb48a: {Name:mk6e0c5df36bbbcbb489321948b8dc7e48e0d551 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:21:27.755593  263568 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21835-5913/.minikube/profiles/newest-cni-340756/apiserver.key.b81bb48a ...
	I1101 09:21:27.755606  263568 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21835-5913/.minikube/profiles/newest-cni-340756/apiserver.key.b81bb48a: {Name:mkfbcb52bf51885ea6244fdb7f88dfae8b653a3e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:21:27.755676  263568 certs.go:382] copying /home/jenkins/minikube-integration/21835-5913/.minikube/profiles/newest-cni-340756/apiserver.crt.b81bb48a -> /home/jenkins/minikube-integration/21835-5913/.minikube/profiles/newest-cni-340756/apiserver.crt
	I1101 09:21:27.755761  263568 certs.go:386] copying /home/jenkins/minikube-integration/21835-5913/.minikube/profiles/newest-cni-340756/apiserver.key.b81bb48a -> /home/jenkins/minikube-integration/21835-5913/.minikube/profiles/newest-cni-340756/apiserver.key
	I1101 09:21:27.755853  263568 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21835-5913/.minikube/profiles/newest-cni-340756/proxy-client.key
	I1101 09:21:27.755884  263568 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21835-5913/.minikube/profiles/newest-cni-340756/proxy-client.crt with IP's: []
	I1101 09:21:27.983684  263568 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21835-5913/.minikube/profiles/newest-cni-340756/proxy-client.crt ...
	I1101 09:21:27.983715  263568 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21835-5913/.minikube/profiles/newest-cni-340756/proxy-client.crt: {Name:mk2a01f0c56ea811a822018dd77d41193ca99202 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:21:27.983905  263568 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21835-5913/.minikube/profiles/newest-cni-340756/proxy-client.key ...
	I1101 09:21:27.983920  263568 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21835-5913/.minikube/profiles/newest-cni-340756/proxy-client.key: {Name:mk7a924dbb62fa5d4646b90d42dbf16e12de6201 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:21:27.984087  263568 certs.go:484] found cert: /home/jenkins/minikube-integration/21835-5913/.minikube/certs/9414.pem (1338 bytes)
	W1101 09:21:27.984124  263568 certs.go:480] ignoring /home/jenkins/minikube-integration/21835-5913/.minikube/certs/9414_empty.pem, impossibly tiny 0 bytes
	I1101 09:21:27.984139  263568 certs.go:484] found cert: /home/jenkins/minikube-integration/21835-5913/.minikube/certs/ca-key.pem (1675 bytes)
	I1101 09:21:27.984170  263568 certs.go:484] found cert: /home/jenkins/minikube-integration/21835-5913/.minikube/certs/ca.pem (1078 bytes)
	I1101 09:21:27.984203  263568 certs.go:484] found cert: /home/jenkins/minikube-integration/21835-5913/.minikube/certs/cert.pem (1123 bytes)
	I1101 09:21:27.984232  263568 certs.go:484] found cert: /home/jenkins/minikube-integration/21835-5913/.minikube/certs/key.pem (1675 bytes)
	I1101 09:21:27.984273  263568 certs.go:484] found cert: /home/jenkins/minikube-integration/21835-5913/.minikube/files/etc/ssl/certs/94142.pem (1708 bytes)
	I1101 09:21:27.984823  263568 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-5913/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1101 09:21:28.003673  263568 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-5913/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1101 09:21:28.022550  263568 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-5913/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1101 09:21:28.041261  263568 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-5913/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1101 09:21:28.060164  263568 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-5913/.minikube/profiles/newest-cni-340756/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1101 09:21:28.079743  263568 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-5913/.minikube/profiles/newest-cni-340756/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1101 09:21:28.098558  263568 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-5913/.minikube/profiles/newest-cni-340756/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1101 09:21:28.117407  263568 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-5913/.minikube/profiles/newest-cni-340756/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1101 09:21:28.136073  263568 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-5913/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1101 09:21:28.157077  263568 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-5913/.minikube/certs/9414.pem --> /usr/share/ca-certificates/9414.pem (1338 bytes)
	I1101 09:21:28.177850  263568 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-5913/.minikube/files/etc/ssl/certs/94142.pem --> /usr/share/ca-certificates/94142.pem (1708 bytes)
	I1101 09:21:28.198397  263568 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
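The certificate steps above generate a client cert, an apiserver serving cert signed for 10.96.0.1, 127.0.0.1, 10.0.0.1 and 192.168.94.2, and an aggregator proxy-client cert, then copy them under /var/lib/minikube/certs on the node. A sketch of how those SANs could be confirmed on the node, not part of the test flow itself:

    sudo openssl x509 -noout -text -in /var/lib/minikube/certs/apiserver.crt \
      | grep -A1 'Subject Alternative Name'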
	I1101 09:21:28.212819  263568 ssh_runner.go:195] Run: openssl version
	I1101 09:21:28.219792  263568 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1101 09:21:28.228932  263568 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1101 09:21:28.233066  263568 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  1 08:29 /usr/share/ca-certificates/minikubeCA.pem
	I1101 09:21:28.233136  263568 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1101 09:21:28.268458  263568 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1101 09:21:28.277690  263568 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9414.pem && ln -fs /usr/share/ca-certificates/9414.pem /etc/ssl/certs/9414.pem"
	I1101 09:21:28.286876  263568 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9414.pem
	I1101 09:21:28.291269  263568 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  1 08:35 /usr/share/ca-certificates/9414.pem
	I1101 09:21:28.291329  263568 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9414.pem
	I1101 09:21:28.325538  263568 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/9414.pem /etc/ssl/certs/51391683.0"
	I1101 09:21:28.334805  263568 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/94142.pem && ln -fs /usr/share/ca-certificates/94142.pem /etc/ssl/certs/94142.pem"
	I1101 09:21:28.343980  263568 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/94142.pem
	I1101 09:21:28.348001  263568 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  1 08:35 /usr/share/ca-certificates/94142.pem
	I1101 09:21:28.348063  263568 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/94142.pem
	I1101 09:21:28.382381  263568 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/94142.pem /etc/ssl/certs/3ec20f2e.0"
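Each `openssl x509 -hash` / `ln -fs` pair above builds the OpenSSL hashed-name links that the system trust store expects (minikubeCA.pem -> b5213941.0, and likewise for 9414.pem and 94142.pem). The pattern, as a sketch:

    hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${hash}.0"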
	I1101 09:21:28.391727  263568 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1101 09:21:28.395966  263568 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1101 09:21:28.396022  263568 kubeadm.go:401] StartCluster: {Name:newest-cni-340756 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-340756 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 09:21:28.396112  263568 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1101 09:21:28.396181  263568 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1101 09:21:28.427389  263568 cri.go:89] found id: ""
	I1101 09:21:28.427460  263568 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1101 09:21:28.436396  263568 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1101 09:21:28.445034  263568 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1101 09:21:28.445096  263568 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1101 09:21:28.453536  263568 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1101 09:21:28.453555  263568 kubeadm.go:158] found existing configuration files:
	
	I1101 09:21:28.453599  263568 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1101 09:21:28.462257  263568 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1101 09:21:28.462315  263568 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1101 09:21:28.470478  263568 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1101 09:21:28.479045  263568 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1101 09:21:28.479118  263568 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1101 09:21:28.487232  263568 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1101 09:21:28.495250  263568 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1101 09:21:28.495317  263568 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1101 09:21:28.503323  263568 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1101 09:21:28.513024  263568 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1101 09:21:28.513166  263568 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1101 09:21:28.522016  263568 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1101 09:21:28.587302  263568 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1043-gcp\n", err: exit status 1
	I1101 09:21:28.653705  263568 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W1101 09:21:31.069175  256247 pod_ready.go:104] pod "coredns-66bc5c9577-wwvth" is not "Ready", error: <nil>
	W1101 09:21:33.070999  256247 pod_ready.go:104] pod "coredns-66bc5c9577-wwvth" is not "Ready", error: <nil>
	I1101 09:21:29.839290  216020 logs.go:123] Gathering logs for CRI-O ...
	I1101 09:21:29.839318  216020 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1101 09:21:29.895909  216020 logs.go:123] Gathering logs for container status ...
	I1101 09:21:29.895943  216020 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1101 09:21:29.929482  216020 logs.go:123] Gathering logs for kubelet ...
	I1101 09:21:29.929510  216020 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1101 09:21:30.037822  216020 logs.go:123] Gathering logs for dmesg ...
	I1101 09:21:30.037874  216020 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1101 09:21:30.054657  216020 logs.go:123] Gathering logs for describe nodes ...
	I1101 09:21:30.054690  216020 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1101 09:21:30.122222  216020 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1101 09:21:32.622956  216020 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1101 09:21:32.623463  216020 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1101 09:21:32.623527  216020 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1101 09:21:32.623583  216020 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1101 09:21:32.663084  216020 cri.go:89] found id: "b88c1a6c22eb7f0b872bf19c02addfdad95c9f578a8590d19c57a767cb19559f"
	I1101 09:21:32.663130  216020 cri.go:89] found id: ""
	I1101 09:21:32.663146  216020 logs.go:282] 1 containers: [b88c1a6c22eb7f0b872bf19c02addfdad95c9f578a8590d19c57a767cb19559f]
	I1101 09:21:32.663225  216020 ssh_runner.go:195] Run: which crictl
	I1101 09:21:32.668663  216020 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1101 09:21:32.668752  216020 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1101 09:21:32.705016  216020 cri.go:89] found id: ""
	I1101 09:21:32.705050  216020 logs.go:282] 0 containers: []
	W1101 09:21:32.705062  216020 logs.go:284] No container was found matching "etcd"
	I1101 09:21:32.705069  216020 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1101 09:21:32.705128  216020 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1101 09:21:32.742963  216020 cri.go:89] found id: ""
	I1101 09:21:32.742994  216020 logs.go:282] 0 containers: []
	W1101 09:21:32.743004  216020 logs.go:284] No container was found matching "coredns"
	I1101 09:21:32.743012  216020 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1101 09:21:32.743072  216020 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1101 09:21:32.776827  216020 cri.go:89] found id: "7bc51deb4b19b20f402b361941a1b67db11ece9919ebd34259e751ec1f07abb8"
	I1101 09:21:32.776856  216020 cri.go:89] found id: ""
	I1101 09:21:32.776894  216020 logs.go:282] 1 containers: [7bc51deb4b19b20f402b361941a1b67db11ece9919ebd34259e751ec1f07abb8]
	I1101 09:21:32.776950  216020 ssh_runner.go:195] Run: which crictl
	I1101 09:21:32.782337  216020 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1101 09:21:32.782413  216020 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1101 09:21:32.821619  216020 cri.go:89] found id: ""
	I1101 09:21:32.821644  216020 logs.go:282] 0 containers: []
	W1101 09:21:32.821654  216020 logs.go:284] No container was found matching "kube-proxy"
	I1101 09:21:32.821661  216020 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1101 09:21:32.821725  216020 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1101 09:21:32.856480  216020 cri.go:89] found id: "b266a6bfcdcb5bd3d3e21a7e3af65c4839577648b3116ca9c382cea59b909867"
	I1101 09:21:32.856504  216020 cri.go:89] found id: ""
	I1101 09:21:32.856514  216020 logs.go:282] 1 containers: [b266a6bfcdcb5bd3d3e21a7e3af65c4839577648b3116ca9c382cea59b909867]
	I1101 09:21:32.856569  216020 ssh_runner.go:195] Run: which crictl
	I1101 09:21:32.861729  216020 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1101 09:21:32.861808  216020 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1101 09:21:32.895569  216020 cri.go:89] found id: ""
	I1101 09:21:32.895601  216020 logs.go:282] 0 containers: []
	W1101 09:21:32.895612  216020 logs.go:284] No container was found matching "kindnet"
	I1101 09:21:32.895621  216020 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1101 09:21:32.895703  216020 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1101 09:21:32.932306  216020 cri.go:89] found id: ""
	I1101 09:21:32.932331  216020 logs.go:282] 0 containers: []
	W1101 09:21:32.932342  216020 logs.go:284] No container was found matching "storage-provisioner"
	I1101 09:21:32.932353  216020 logs.go:123] Gathering logs for kube-scheduler [7bc51deb4b19b20f402b361941a1b67db11ece9919ebd34259e751ec1f07abb8] ...
	I1101 09:21:32.932368  216020 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7bc51deb4b19b20f402b361941a1b67db11ece9919ebd34259e751ec1f07abb8"
	I1101 09:21:32.997846  216020 logs.go:123] Gathering logs for kube-controller-manager [b266a6bfcdcb5bd3d3e21a7e3af65c4839577648b3116ca9c382cea59b909867] ...
	I1101 09:21:32.997892  216020 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b266a6bfcdcb5bd3d3e21a7e3af65c4839577648b3116ca9c382cea59b909867"
	I1101 09:21:33.036043  216020 logs.go:123] Gathering logs for CRI-O ...
	I1101 09:21:33.036086  216020 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1101 09:21:33.123414  216020 logs.go:123] Gathering logs for container status ...
	I1101 09:21:33.123453  216020 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1101 09:21:33.169800  216020 logs.go:123] Gathering logs for kubelet ...
	I1101 09:21:33.169831  216020 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1101 09:21:33.295230  216020 logs.go:123] Gathering logs for dmesg ...
	I1101 09:21:33.295267  216020 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1101 09:21:33.313406  216020 logs.go:123] Gathering logs for describe nodes ...
	I1101 09:21:33.313451  216020 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1101 09:21:33.393782  216020 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1101 09:21:33.393809  216020 logs.go:123] Gathering logs for kube-apiserver [b88c1a6c22eb7f0b872bf19c02addfdad95c9f578a8590d19c57a767cb19559f] ...
	I1101 09:21:33.393825  216020 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b88c1a6c22eb7f0b872bf19c02addfdad95c9f578a8590d19c57a767cb19559f"
	I1101 09:21:37.173559  262357 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1101 09:21:37.173618  262357 kubeadm.go:319] [preflight] Running pre-flight checks
	I1101 09:21:37.173719  262357 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1101 09:21:37.173805  262357 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1043-gcp
	I1101 09:21:37.173907  262357 kubeadm.go:319] OS: Linux
	I1101 09:21:37.173993  262357 kubeadm.go:319] CGROUPS_CPU: enabled
	I1101 09:21:37.174063  262357 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1101 09:21:37.174128  262357 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1101 09:21:37.174195  262357 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1101 09:21:37.174262  262357 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1101 09:21:37.174353  262357 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1101 09:21:37.174424  262357 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1101 09:21:37.174488  262357 kubeadm.go:319] CGROUPS_IO: enabled
	I1101 09:21:37.174628  262357 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1101 09:21:37.174731  262357 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1101 09:21:37.174826  262357 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1101 09:21:37.174991  262357 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1101 09:21:37.176483  262357 out.go:252]   - Generating certificates and keys ...
	I1101 09:21:37.176614  262357 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1101 09:21:37.176733  262357 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1101 09:21:37.176843  262357 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1101 09:21:37.176981  262357 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1101 09:21:37.177075  262357 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1101 09:21:37.177161  262357 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1101 09:21:37.177231  262357 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1101 09:21:37.177406  262357 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [default-k8s-diff-port-648641 localhost] and IPs [192.168.103.2 127.0.0.1 ::1]
	I1101 09:21:37.177469  262357 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1101 09:21:37.177621  262357 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [default-k8s-diff-port-648641 localhost] and IPs [192.168.103.2 127.0.0.1 ::1]
	I1101 09:21:37.177724  262357 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1101 09:21:37.177823  262357 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1101 09:21:37.177897  262357 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1101 09:21:37.177981  262357 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1101 09:21:37.178058  262357 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1101 09:21:37.178133  262357 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1101 09:21:37.178209  262357 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1101 09:21:37.178317  262357 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1101 09:21:37.178416  262357 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1101 09:21:37.178535  262357 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1101 09:21:37.178677  262357 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1101 09:21:37.181296  262357 out.go:252]   - Booting up control plane ...
	I1101 09:21:37.181439  262357 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1101 09:21:37.181577  262357 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1101 09:21:37.181697  262357 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1101 09:21:37.181814  262357 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1101 09:21:37.181979  262357 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1101 09:21:37.182130  262357 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1101 09:21:37.182217  262357 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1101 09:21:37.182256  262357 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1101 09:21:37.182373  262357 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1101 09:21:37.182503  262357 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1101 09:21:37.182581  262357 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 501.981198ms
	I1101 09:21:37.182667  262357 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1101 09:21:37.182801  262357 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.103.2:8444/livez
	I1101 09:21:37.182953  262357 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1101 09:21:37.183074  262357 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1101 09:21:37.183193  262357 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 1.614305294s
	I1101 09:21:37.183252  262357 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 2.436782156s
	I1101 09:21:37.183327  262357 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 4.503947493s
	I1101 09:21:37.183453  262357 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1101 09:21:37.183586  262357 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1101 09:21:37.183656  262357 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1101 09:21:37.183966  262357 kubeadm.go:319] [mark-control-plane] Marking the node default-k8s-diff-port-648641 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1101 09:21:37.184062  262357 kubeadm.go:319] [bootstrap-token] Using token: dttn2w.a8ocz5il4ubw84p7
	I1101 09:21:37.185353  262357 out.go:252]   - Configuring RBAC rules ...
	I1101 09:21:37.185480  262357 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1101 09:21:37.185609  262357 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1101 09:21:37.185808  262357 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1101 09:21:37.185999  262357 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1101 09:21:37.186170  262357 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1101 09:21:37.186311  262357 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1101 09:21:37.186468  262357 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1101 09:21:37.186544  262357 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1101 09:21:37.186626  262357 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1101 09:21:37.186642  262357 kubeadm.go:319] 
	I1101 09:21:37.186717  262357 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1101 09:21:37.186727  262357 kubeadm.go:319] 
	I1101 09:21:37.186809  262357 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1101 09:21:37.186818  262357 kubeadm.go:319] 
	I1101 09:21:37.186840  262357 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1101 09:21:37.186951  262357 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1101 09:21:37.187017  262357 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1101 09:21:37.187028  262357 kubeadm.go:319] 
	I1101 09:21:37.187101  262357 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1101 09:21:37.187110  262357 kubeadm.go:319] 
	I1101 09:21:37.187180  262357 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1101 09:21:37.187188  262357 kubeadm.go:319] 
	I1101 09:21:37.187264  262357 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1101 09:21:37.187367  262357 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1101 09:21:37.187471  262357 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1101 09:21:37.187488  262357 kubeadm.go:319] 
	I1101 09:21:37.187599  262357 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1101 09:21:37.187723  262357 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1101 09:21:37.187741  262357 kubeadm.go:319] 
	I1101 09:21:37.187836  262357 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8444 --token dttn2w.a8ocz5il4ubw84p7 \
	I1101 09:21:37.187993  262357 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:77adef89518789e9abc276228bed68518bce4d031c33004dbbcc2105ab5e0752 \
	I1101 09:21:37.188042  262357 kubeadm.go:319] 	--control-plane 
	I1101 09:21:37.188083  262357 kubeadm.go:319] 
	I1101 09:21:37.188203  262357 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1101 09:21:37.188214  262357 kubeadm.go:319] 
	I1101 09:21:37.188320  262357 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8444 --token dttn2w.a8ocz5il4ubw84p7 \
	I1101 09:21:37.188479  262357 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:77adef89518789e9abc276228bed68518bce4d031c33004dbbcc2105ab5e0752 
	I1101 09:21:37.188490  262357 cni.go:84] Creating CNI manager for ""
	I1101 09:21:37.188499  262357 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1101 09:21:37.190119  262357 out.go:179] * Configuring CNI (Container Networking Interface) ...
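	For reference, the --discovery-token-ca-cert-hash value printed in the kubeadm join commands above is the SHA-256 digest of the cluster CA's public key (its DER-encoded SubjectPublicKeyInfo), which is how kubeadm documents that flag. A minimal Go sketch of that derivation, assuming the CA certificate sits at /var/lib/minikube/certs/ca.crt as the "[certs] Using certificateDir folder" line indicates:

	    package main

	    import (
	        "crypto/sha256"
	        "crypto/x509"
	        "encoding/pem"
	        "fmt"
	        "os"
	    )

	    func main() {
	        // Path taken from the [certs] line above; adjust if the CA lives elsewhere.
	        pemBytes, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
	        if err != nil {
	            panic(err)
	        }
	        block, _ := pem.Decode(pemBytes)
	        if block == nil {
	            panic("no PEM block found in ca.crt")
	        }
	        cert, err := x509.ParseCertificate(block.Bytes)
	        if err != nil {
	            panic(err)
	        }
	        // kubeadm publishes the SHA-256 of the CA's DER-encoded SubjectPublicKeyInfo.
	        sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
	        fmt.Printf("sha256:%x\n", sum)
	    }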
	I1101 09:21:39.072652  263568 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1101 09:21:39.072740  263568 kubeadm.go:319] [preflight] Running pre-flight checks
	I1101 09:21:39.072909  263568 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1101 09:21:39.073018  263568 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1043-gcp
	I1101 09:21:39.073091  263568 kubeadm.go:319] OS: Linux
	I1101 09:21:39.073179  263568 kubeadm.go:319] CGROUPS_CPU: enabled
	I1101 09:21:39.073252  263568 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1101 09:21:39.073324  263568 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1101 09:21:39.073391  263568 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1101 09:21:39.073461  263568 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1101 09:21:39.073536  263568 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1101 09:21:39.073614  263568 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1101 09:21:39.073685  263568 kubeadm.go:319] CGROUPS_IO: enabled
	I1101 09:21:39.073792  263568 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1101 09:21:39.073954  263568 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1101 09:21:39.074092  263568 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1101 09:21:39.074196  263568 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1101 09:21:39.076838  263568 out.go:252]   - Generating certificates and keys ...
	I1101 09:21:39.076956  263568 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1101 09:21:39.077068  263568 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1101 09:21:39.077160  263568 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1101 09:21:39.077218  263568 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1101 09:21:39.077301  263568 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1101 09:21:39.077350  263568 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1101 09:21:39.077400  263568 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1101 09:21:39.077516  263568 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-340756] and IPs [192.168.94.2 127.0.0.1 ::1]
	I1101 09:21:39.077563  263568 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1101 09:21:39.077727  263568 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-340756] and IPs [192.168.94.2 127.0.0.1 ::1]
	I1101 09:21:39.077833  263568 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1101 09:21:39.077926  263568 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1101 09:21:39.078002  263568 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1101 09:21:39.078060  263568 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1101 09:21:39.078104  263568 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1101 09:21:39.078164  263568 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1101 09:21:39.078242  263568 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1101 09:21:39.078357  263568 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1101 09:21:39.078407  263568 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1101 09:21:39.078476  263568 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1101 09:21:39.078534  263568 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1101 09:21:39.079883  263568 out.go:252]   - Booting up control plane ...
	I1101 09:21:39.079979  263568 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1101 09:21:39.080072  263568 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1101 09:21:39.080158  263568 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1101 09:21:39.080248  263568 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1101 09:21:39.080321  263568 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1101 09:21:39.080410  263568 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1101 09:21:39.080487  263568 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1101 09:21:39.080520  263568 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1101 09:21:39.080621  263568 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1101 09:21:39.080708  263568 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1101 09:21:39.080762  263568 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 501.756426ms
	I1101 09:21:39.080849  263568 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1101 09:21:39.080954  263568 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.94.2:8443/livez
	I1101 09:21:39.081027  263568 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1101 09:21:39.081139  263568 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1101 09:21:39.081209  263568 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 1.622492073s
	I1101 09:21:39.081276  263568 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 2.035370604s
	I1101 09:21:39.081342  263568 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 4.001446602s
	I1101 09:21:39.081478  263568 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1101 09:21:39.081608  263568 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1101 09:21:39.081696  263568 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1101 09:21:39.081968  263568 kubeadm.go:319] [mark-control-plane] Marking the node newest-cni-340756 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1101 09:21:39.082027  263568 kubeadm.go:319] [bootstrap-token] Using token: tyli2s.xsn3fo4xtejuilp0
	I1101 09:21:39.083715  263568 out.go:252]   - Configuring RBAC rules ...
	I1101 09:21:39.083857  263568 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1101 09:21:39.084000  263568 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1101 09:21:39.084180  263568 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1101 09:21:39.084305  263568 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1101 09:21:39.084396  263568 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1101 09:21:39.084465  263568 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1101 09:21:39.084563  263568 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1101 09:21:39.084630  263568 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1101 09:21:39.084704  263568 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1101 09:21:39.084713  263568 kubeadm.go:319] 
	I1101 09:21:39.084803  263568 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1101 09:21:39.084812  263568 kubeadm.go:319] 
	I1101 09:21:39.084946  263568 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1101 09:21:39.084956  263568 kubeadm.go:319] 
	I1101 09:21:39.084996  263568 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1101 09:21:39.085106  263568 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1101 09:21:39.085202  263568 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1101 09:21:39.085211  263568 kubeadm.go:319] 
	I1101 09:21:39.085305  263568 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1101 09:21:39.085324  263568 kubeadm.go:319] 
	I1101 09:21:39.085398  263568 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1101 09:21:39.085416  263568 kubeadm.go:319] 
	I1101 09:21:39.085474  263568 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1101 09:21:39.085544  263568 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1101 09:21:39.085631  263568 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1101 09:21:39.085639  263568 kubeadm.go:319] 
	I1101 09:21:39.085752  263568 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1101 09:21:39.085858  263568 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1101 09:21:39.085878  263568 kubeadm.go:319] 
	I1101 09:21:39.085987  263568 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token tyli2s.xsn3fo4xtejuilp0 \
	I1101 09:21:39.086079  263568 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:77adef89518789e9abc276228bed68518bce4d031c33004dbbcc2105ab5e0752 \
	I1101 09:21:39.086098  263568 kubeadm.go:319] 	--control-plane 
	I1101 09:21:39.086103  263568 kubeadm.go:319] 
	I1101 09:21:39.086168  263568 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1101 09:21:39.086174  263568 kubeadm.go:319] 
	I1101 09:21:39.086241  263568 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token tyli2s.xsn3fo4xtejuilp0 \
	I1101 09:21:39.086362  263568 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:77adef89518789e9abc276228bed68518bce4d031c33004dbbcc2105ab5e0752 
	I1101 09:21:39.086377  263568 cni.go:84] Creating CNI manager for ""
	I1101 09:21:39.086383  263568 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1101 09:21:39.088895  263568 out.go:179] * Configuring CNI (Container Networking Interface) ...
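	The [control-plane-check] lines above come from kubeadm polling the component health endpoints (kube-apiserver /livez on the advertise address, kube-controller-manager :10257/healthz, kube-scheduler :10259/livez) until they answer 200 or the 4m0s budget runs out. A rough Go sketch of that polling pattern, not kubeadm's actual code; the URLs and timeout are taken from the log lines above:

	    package main

	    import (
	        "crypto/tls"
	        "fmt"
	        "net/http"
	        "time"
	    )

	    // waitHealthy polls url until it answers 200 OK or the timeout elapses.
	    func waitHealthy(url string, timeout time.Duration) (time.Duration, error) {
	        client := &http.Client{
	            Timeout: 2 * time.Second,
	            // Bootstrap components serve self-signed certs; skip verification in this sketch.
	            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	        }
	        start := time.Now()
	        for time.Since(start) < timeout {
	            resp, err := client.Get(url)
	            if err == nil {
	                resp.Body.Close()
	                if resp.StatusCode == http.StatusOK {
	                    return time.Since(start), nil
	                }
	            }
	            time.Sleep(500 * time.Millisecond)
	        }
	        return 0, fmt.Errorf("%s still unhealthy after %s", url, timeout)
	    }

	    func main() {
	        for _, url := range []string{
	            "https://192.168.94.2:8443/livez", // kube-apiserver (address/port from the log)
	            "https://127.0.0.1:10257/healthz", // kube-controller-manager
	            "https://127.0.0.1:10259/livez",   // kube-scheduler
	        } {
	            if d, err := waitHealthy(url, 4*time.Minute); err != nil {
	                fmt.Println(err)
	            } else {
	                fmt.Printf("%s is healthy after %s\n", url, d)
	            }
	        }
	    }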
	W1101 09:21:35.571965  256247 pod_ready.go:104] pod "coredns-66bc5c9577-wwvth" is not "Ready", error: <nil>
	W1101 09:21:38.071647  256247 pod_ready.go:104] pod "coredns-66bc5c9577-wwvth" is not "Ready", error: <nil>
	I1101 09:21:35.933884  216020 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1101 09:21:35.934303  216020 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1101 09:21:35.934356  216020 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1101 09:21:35.934407  216020 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1101 09:21:35.962382  216020 cri.go:89] found id: "b88c1a6c22eb7f0b872bf19c02addfdad95c9f578a8590d19c57a767cb19559f"
	I1101 09:21:35.962409  216020 cri.go:89] found id: ""
	I1101 09:21:35.962432  216020 logs.go:282] 1 containers: [b88c1a6c22eb7f0b872bf19c02addfdad95c9f578a8590d19c57a767cb19559f]
	I1101 09:21:35.962493  216020 ssh_runner.go:195] Run: which crictl
	I1101 09:21:35.967486  216020 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1101 09:21:35.967566  216020 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1101 09:21:35.995175  216020 cri.go:89] found id: ""
	I1101 09:21:35.995205  216020 logs.go:282] 0 containers: []
	W1101 09:21:35.995215  216020 logs.go:284] No container was found matching "etcd"
	I1101 09:21:35.995223  216020 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1101 09:21:35.995277  216020 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1101 09:21:36.023392  216020 cri.go:89] found id: ""
	I1101 09:21:36.023425  216020 logs.go:282] 0 containers: []
	W1101 09:21:36.023435  216020 logs.go:284] No container was found matching "coredns"
	I1101 09:21:36.023442  216020 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1101 09:21:36.023495  216020 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1101 09:21:36.051769  216020 cri.go:89] found id: "7bc51deb4b19b20f402b361941a1b67db11ece9919ebd34259e751ec1f07abb8"
	I1101 09:21:36.051791  216020 cri.go:89] found id: ""
	I1101 09:21:36.051800  216020 logs.go:282] 1 containers: [7bc51deb4b19b20f402b361941a1b67db11ece9919ebd34259e751ec1f07abb8]
	I1101 09:21:36.051879  216020 ssh_runner.go:195] Run: which crictl
	I1101 09:21:36.055950  216020 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1101 09:21:36.056021  216020 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1101 09:21:36.084309  216020 cri.go:89] found id: ""
	I1101 09:21:36.084329  216020 logs.go:282] 0 containers: []
	W1101 09:21:36.084337  216020 logs.go:284] No container was found matching "kube-proxy"
	I1101 09:21:36.084343  216020 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1101 09:21:36.084394  216020 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1101 09:21:36.112936  216020 cri.go:89] found id: "b266a6bfcdcb5bd3d3e21a7e3af65c4839577648b3116ca9c382cea59b909867"
	I1101 09:21:36.112967  216020 cri.go:89] found id: ""
	I1101 09:21:36.112978  216020 logs.go:282] 1 containers: [b266a6bfcdcb5bd3d3e21a7e3af65c4839577648b3116ca9c382cea59b909867]
	I1101 09:21:36.113024  216020 ssh_runner.go:195] Run: which crictl
	I1101 09:21:36.117134  216020 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1101 09:21:36.117193  216020 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1101 09:21:36.144389  216020 cri.go:89] found id: ""
	I1101 09:21:36.144419  216020 logs.go:282] 0 containers: []
	W1101 09:21:36.144432  216020 logs.go:284] No container was found matching "kindnet"
	I1101 09:21:36.144447  216020 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1101 09:21:36.144507  216020 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1101 09:21:36.174333  216020 cri.go:89] found id: ""
	I1101 09:21:36.174355  216020 logs.go:282] 0 containers: []
	W1101 09:21:36.174363  216020 logs.go:284] No container was found matching "storage-provisioner"
	I1101 09:21:36.174372  216020 logs.go:123] Gathering logs for dmesg ...
	I1101 09:21:36.174383  216020 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1101 09:21:36.190991  216020 logs.go:123] Gathering logs for describe nodes ...
	I1101 09:21:36.191022  216020 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1101 09:21:36.260051  216020 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1101 09:21:36.260074  216020 logs.go:123] Gathering logs for kube-apiserver [b88c1a6c22eb7f0b872bf19c02addfdad95c9f578a8590d19c57a767cb19559f] ...
	I1101 09:21:36.260090  216020 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b88c1a6c22eb7f0b872bf19c02addfdad95c9f578a8590d19c57a767cb19559f"
	I1101 09:21:36.298788  216020 logs.go:123] Gathering logs for kube-scheduler [7bc51deb4b19b20f402b361941a1b67db11ece9919ebd34259e751ec1f07abb8] ...
	I1101 09:21:36.298821  216020 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7bc51deb4b19b20f402b361941a1b67db11ece9919ebd34259e751ec1f07abb8"
	I1101 09:21:36.362575  216020 logs.go:123] Gathering logs for kube-controller-manager [b266a6bfcdcb5bd3d3e21a7e3af65c4839577648b3116ca9c382cea59b909867] ...
	I1101 09:21:36.362621  216020 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b266a6bfcdcb5bd3d3e21a7e3af65c4839577648b3116ca9c382cea59b909867"
	I1101 09:21:36.399166  216020 logs.go:123] Gathering logs for CRI-O ...
	I1101 09:21:36.399193  216020 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1101 09:21:36.468028  216020 logs.go:123] Gathering logs for container status ...
	I1101 09:21:36.468069  216020 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1101 09:21:36.501911  216020 logs.go:123] Gathering logs for kubelet ...
	I1101 09:21:36.501939  216020 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1101 09:21:39.109967  216020 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1101 09:21:39.110414  216020 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1101 09:21:39.110469  216020 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1101 09:21:39.110522  216020 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1101 09:21:39.143264  216020 cri.go:89] found id: "b88c1a6c22eb7f0b872bf19c02addfdad95c9f578a8590d19c57a767cb19559f"
	I1101 09:21:39.143283  216020 cri.go:89] found id: ""
	I1101 09:21:39.143290  216020 logs.go:282] 1 containers: [b88c1a6c22eb7f0b872bf19c02addfdad95c9f578a8590d19c57a767cb19559f]
	I1101 09:21:39.143342  216020 ssh_runner.go:195] Run: which crictl
	I1101 09:21:39.147691  216020 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1101 09:21:39.147771  216020 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1101 09:21:39.178502  216020 cri.go:89] found id: ""
	I1101 09:21:39.178528  216020 logs.go:282] 0 containers: []
	W1101 09:21:39.178538  216020 logs.go:284] No container was found matching "etcd"
	I1101 09:21:39.178545  216020 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1101 09:21:39.178607  216020 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1101 09:21:39.211410  216020 cri.go:89] found id: ""
	I1101 09:21:39.211440  216020 logs.go:282] 0 containers: []
	W1101 09:21:39.211450  216020 logs.go:284] No container was found matching "coredns"
	I1101 09:21:39.211459  216020 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1101 09:21:39.211521  216020 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1101 09:21:39.246616  216020 cri.go:89] found id: "7bc51deb4b19b20f402b361941a1b67db11ece9919ebd34259e751ec1f07abb8"
	I1101 09:21:39.246645  216020 cri.go:89] found id: ""
	I1101 09:21:39.246655  216020 logs.go:282] 1 containers: [7bc51deb4b19b20f402b361941a1b67db11ece9919ebd34259e751ec1f07abb8]
	I1101 09:21:39.246724  216020 ssh_runner.go:195] Run: which crictl
	I1101 09:21:39.251059  216020 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1101 09:21:39.251127  216020 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1101 09:21:39.284617  216020 cri.go:89] found id: ""
	I1101 09:21:39.284652  216020 logs.go:282] 0 containers: []
	W1101 09:21:39.284664  216020 logs.go:284] No container was found matching "kube-proxy"
	I1101 09:21:39.284672  216020 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1101 09:21:39.284740  216020 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1101 09:21:39.319828  216020 cri.go:89] found id: "b266a6bfcdcb5bd3d3e21a7e3af65c4839577648b3116ca9c382cea59b909867"
	I1101 09:21:39.319852  216020 cri.go:89] found id: ""
	I1101 09:21:39.319892  216020 logs.go:282] 1 containers: [b266a6bfcdcb5bd3d3e21a7e3af65c4839577648b3116ca9c382cea59b909867]
	I1101 09:21:39.319956  216020 ssh_runner.go:195] Run: which crictl
	I1101 09:21:39.325472  216020 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1101 09:21:39.325548  216020 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1101 09:21:39.368025  216020 cri.go:89] found id: ""
	I1101 09:21:39.368054  216020 logs.go:282] 0 containers: []
	W1101 09:21:39.368065  216020 logs.go:284] No container was found matching "kindnet"
	I1101 09:21:39.368074  216020 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1101 09:21:39.368133  216020 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1101 09:21:39.405733  216020 cri.go:89] found id: ""
	I1101 09:21:39.405771  216020 logs.go:282] 0 containers: []
	W1101 09:21:39.405783  216020 logs.go:284] No container was found matching "storage-provisioner"
	I1101 09:21:39.405793  216020 logs.go:123] Gathering logs for kube-apiserver [b88c1a6c22eb7f0b872bf19c02addfdad95c9f578a8590d19c57a767cb19559f] ...
	I1101 09:21:39.405814  216020 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b88c1a6c22eb7f0b872bf19c02addfdad95c9f578a8590d19c57a767cb19559f"
	I1101 09:21:39.465634  216020 logs.go:123] Gathering logs for kube-scheduler [7bc51deb4b19b20f402b361941a1b67db11ece9919ebd34259e751ec1f07abb8] ...
	I1101 09:21:39.465668  216020 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7bc51deb4b19b20f402b361941a1b67db11ece9919ebd34259e751ec1f07abb8"
	I1101 09:21:39.525559  216020 logs.go:123] Gathering logs for kube-controller-manager [b266a6bfcdcb5bd3d3e21a7e3af65c4839577648b3116ca9c382cea59b909867] ...
	I1101 09:21:39.525593  216020 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b266a6bfcdcb5bd3d3e21a7e3af65c4839577648b3116ca9c382cea59b909867"
	I1101 09:21:39.558313  216020 logs.go:123] Gathering logs for CRI-O ...
	I1101 09:21:39.558350  216020 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1101 09:21:39.625965  216020 logs.go:123] Gathering logs for container status ...
	I1101 09:21:39.626006  216020 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1101 09:21:39.658059  216020 logs.go:123] Gathering logs for kubelet ...
	I1101 09:21:39.658092  216020 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1101 09:21:39.759174  216020 logs.go:123] Gathering logs for dmesg ...
	I1101 09:21:39.759215  216020 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1101 09:21:39.777214  216020 logs.go:123] Gathering logs for describe nodes ...
	I1101 09:21:39.777243  216020 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
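	The repeated "Gathering logs for ..." cycles above follow a fixed recipe: list containers for each control-plane component with crictl, tail the last 400 lines of any that exist, then collect kubelet and CRI-O output via journalctl plus dmesg. In the test these commands run over SSH inside the node container; the sketch below runs the same commands locally and is only an approximation of minikube's logs.go flow:

	    package main

	    import (
	        "fmt"
	        "os/exec"
	        "strings"
	    )

	    func run(cmdline string) (string, error) {
	        out, err := exec.Command("/bin/bash", "-c", cmdline).CombinedOutput()
	        return string(out), err
	    }

	    func main() {
	        components := []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
	            "kube-proxy", "kube-controller-manager", "kindnet", "storage-provisioner"}
	        for _, name := range components {
	            ids, err := run("sudo crictl ps -a --quiet --name=" + name)
	            if err != nil || strings.TrimSpace(ids) == "" {
	                fmt.Printf("no container found matching %q\n", name)
	                continue
	            }
	            for _, id := range strings.Fields(ids) {
	                logs, _ := run("sudo crictl logs --tail 400 " + id)
	                fmt.Printf("==> %s (%s)\n%s\n", name, id, logs)
	            }
	        }
	        // Node-level logs, mirroring the journalctl/dmesg runs in the log above.
	        for _, cmdline := range []string{
	            "sudo journalctl -u crio -n 400",
	            "sudo journalctl -u kubelet -n 400",
	            "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400",
	        } {
	            out, _ := run(cmdline)
	            fmt.Println(out)
	        }
	    }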
	I1101 09:21:37.191433  262357 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1101 09:21:37.197163  262357 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1101 09:21:37.197191  262357 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1101 09:21:37.214677  262357 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1101 09:21:37.444545  262357 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 09:21:37.444657  262357 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1101 09:21:37.445053  262357 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-648641 minikube.k8s.io/updated_at=2025_11_01T09_21_37_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=21e20c7776311c6e29254646bf2620ea610dd192 minikube.k8s.io/name=default-k8s-diff-port-648641 minikube.k8s.io/primary=true
	I1101 09:21:37.529149  262357 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 09:21:37.529164  262357 ops.go:34] apiserver oom_adj: -16
	I1101 09:21:38.029350  262357 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 09:21:38.529475  262357 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 09:21:39.029339  262357 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 09:21:39.530028  262357 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 09:21:40.029249  262357 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 09:21:40.530053  262357 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 09:21:41.029582  262357 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 09:21:39.091032  263568 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1101 09:21:39.096443  263568 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1101 09:21:39.096465  263568 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1101 09:21:39.111385  263568 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1101 09:21:39.374555  263568 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1101 09:21:39.374668  263568 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 09:21:39.374704  263568 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes newest-cni-340756 minikube.k8s.io/updated_at=2025_11_01T09_21_39_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=21e20c7776311c6e29254646bf2620ea610dd192 minikube.k8s.io/name=newest-cni-340756 minikube.k8s.io/primary=true
	I1101 09:21:39.494159  263568 ops.go:34] apiserver oom_adj: -16
	I1101 09:21:39.494196  263568 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 09:21:39.994658  263568 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 09:21:40.494262  263568 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 09:21:40.994330  263568 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 09:21:41.530066  262357 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 09:21:41.604074  262357 kubeadm.go:1114] duration metric: took 4.159600393s to wait for elevateKubeSystemPrivileges
	I1101 09:21:41.604116  262357 kubeadm.go:403] duration metric: took 15.575329154s to StartCluster
	I1101 09:21:41.604136  262357 settings.go:142] acquiring lock: {Name:mkb1ba7d0d4bb15f3f0746ce486d72703f901580 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:21:41.604215  262357 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21835-5913/kubeconfig
	I1101 09:21:41.605814  262357 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21835-5913/kubeconfig: {Name:mk9d3795c1875d6d9ba81c81d9d8436a6b7942d2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:21:41.606175  262357 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1101 09:21:41.606192  262357 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1101 09:21:41.606275  262357 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1101 09:21:41.606366  262357 config.go:182] Loaded profile config "default-k8s-diff-port-648641": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:21:41.606402  262357 addons.go:70] Setting storage-provisioner=true in profile "default-k8s-diff-port-648641"
	I1101 09:21:41.606422  262357 addons.go:70] Setting default-storageclass=true in profile "default-k8s-diff-port-648641"
	I1101 09:21:41.606459  262357 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-648641"
	I1101 09:21:41.606432  262357 addons.go:239] Setting addon storage-provisioner=true in "default-k8s-diff-port-648641"
	I1101 09:21:41.606586  262357 host.go:66] Checking if "default-k8s-diff-port-648641" exists ...
	I1101 09:21:41.606969  262357 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-648641 --format={{.State.Status}}
	I1101 09:21:41.607153  262357 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-648641 --format={{.State.Status}}
	I1101 09:21:41.607935  262357 out.go:179] * Verifying Kubernetes components...
	I1101 09:21:41.609213  262357 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 09:21:41.636030  262357 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1101 09:21:41.636379  262357 addons.go:239] Setting addon default-storageclass=true in "default-k8s-diff-port-648641"
	I1101 09:21:41.636432  262357 host.go:66] Checking if "default-k8s-diff-port-648641" exists ...
	I1101 09:21:41.636825  262357 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-648641 --format={{.State.Status}}
	I1101 09:21:41.637464  262357 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1101 09:21:41.637489  262357 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1101 09:21:41.637568  262357 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-648641
	I1101 09:21:41.674180  262357 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33083 SSHKeyPath:/home/jenkins/minikube-integration/21835-5913/.minikube/machines/default-k8s-diff-port-648641/id_rsa Username:docker}
	I1101 09:21:41.679112  262357 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1101 09:21:41.679137  262357 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1101 09:21:41.679211  262357 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-648641
	I1101 09:21:41.703106  262357 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33083 SSHKeyPath:/home/jenkins/minikube-integration/21835-5913/.minikube/machines/default-k8s-diff-port-648641/id_rsa Username:docker}
	I1101 09:21:41.742430  262357 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.103.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1101 09:21:41.780409  262357 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1101 09:21:41.810267  262357 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1101 09:21:41.844165  262357 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1101 09:21:41.949468  262357 start.go:977] {"host.minikube.internal": 192.168.103.1} host record injected into CoreDNS's ConfigMap
	I1101 09:21:41.950803  262357 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-648641" to be "Ready" ...
	I1101 09:21:42.179770  262357 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
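	After the addons are enabled, node_ready.go waits up to 6m0s for the node to report Ready. A compact client-go sketch of that check; the kubeconfig path and node name are placeholders taken from the log, and the real code adds backoff and richer logging:

	    package main

	    import (
	        "context"
	        "fmt"
	        "time"

	        corev1 "k8s.io/api/core/v1"
	        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	        "k8s.io/client-go/kubernetes"
	        "k8s.io/client-go/tools/clientcmd"
	    )

	    func nodeReady(cs *kubernetes.Clientset, name string) (bool, error) {
	        node, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
	        if err != nil {
	            return false, err
	        }
	        for _, cond := range node.Status.Conditions {
	            if cond.Type == corev1.NodeReady {
	                return cond.Status == corev1.ConditionTrue, nil
	            }
	        }
	        return false, nil
	    }

	    func main() {
	        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig") // placeholder path
	        if err != nil {
	            panic(err)
	        }
	        cs, err := kubernetes.NewForConfig(cfg)
	        if err != nil {
	            panic(err)
	        }
	        deadline := time.Now().Add(6 * time.Minute)
	        for time.Now().Before(deadline) {
	            if ok, err := nodeReady(cs, "default-k8s-diff-port-648641"); err == nil && ok {
	                fmt.Println("node is Ready")
	                return
	            }
	            time.Sleep(2 * time.Second)
	        }
	        fmt.Println("timed out waiting for node to be Ready")
	    }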
	W1101 09:21:40.572956  256247 pod_ready.go:104] pod "coredns-66bc5c9577-wwvth" is not "Ready", error: <nil>
	I1101 09:21:42.069428  256247 pod_ready.go:94] pod "coredns-66bc5c9577-wwvth" is "Ready"
	I1101 09:21:42.069460  256247 pod_ready.go:86] duration metric: took 31.506833683s for pod "coredns-66bc5c9577-wwvth" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:21:42.073169  256247 pod_ready.go:83] waiting for pod "etcd-embed-certs-236314" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:21:42.079254  256247 pod_ready.go:94] pod "etcd-embed-certs-236314" is "Ready"
	I1101 09:21:42.079286  256247 pod_ready.go:86] duration metric: took 6.089876ms for pod "etcd-embed-certs-236314" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:21:42.082501  256247 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-236314" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:21:42.087688  256247 pod_ready.go:94] pod "kube-apiserver-embed-certs-236314" is "Ready"
	I1101 09:21:42.087717  256247 pod_ready.go:86] duration metric: took 5.189118ms for pod "kube-apiserver-embed-certs-236314" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:21:42.091305  256247 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-236314" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:21:42.266702  256247 pod_ready.go:94] pod "kube-controller-manager-embed-certs-236314" is "Ready"
	I1101 09:21:42.266737  256247 pod_ready.go:86] duration metric: took 175.350883ms for pod "kube-controller-manager-embed-certs-236314" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:21:42.467790  256247 pod_ready.go:83] waiting for pod "kube-proxy-55ft8" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:21:42.866637  256247 pod_ready.go:94] pod "kube-proxy-55ft8" is "Ready"
	I1101 09:21:42.866676  256247 pod_ready.go:86] duration metric: took 398.843584ms for pod "kube-proxy-55ft8" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:21:43.067021  256247 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-236314" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:21:43.466749  256247 pod_ready.go:94] pod "kube-scheduler-embed-certs-236314" is "Ready"
	I1101 09:21:43.466783  256247 pod_ready.go:86] duration metric: took 399.735748ms for pod "kube-scheduler-embed-certs-236314" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:21:43.466804  256247 pod_ready.go:40] duration metric: took 32.908124185s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1101 09:21:43.514845  256247 start.go:628] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1101 09:21:43.516754  256247 out.go:179] * Done! kubectl is now configured to use "embed-certs-236314" cluster and "default" namespace by default
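	The pod_ready.go waits above (coredns, etcd, kube-apiserver, kube-controller-manager, kube-proxy, kube-scheduler) all reduce to checking the PodReady condition on pods selected by label in kube-system. A hedged client-go sketch of that check; the label selectors come from the log line listing them, everything else is illustrative:

	    package main

	    import (
	        "context"
	        "fmt"

	        corev1 "k8s.io/api/core/v1"
	        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	        "k8s.io/client-go/kubernetes"
	        "k8s.io/client-go/tools/clientcmd"
	    )

	    func podsReady(cs *kubernetes.Clientset, selector string) (bool, error) {
	        pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(),
	            metav1.ListOptions{LabelSelector: selector})
	        if err != nil {
	            return false, err
	        }
	        for _, pod := range pods.Items {
	            ready := false
	            for _, cond := range pod.Status.Conditions {
	                if cond.Type == corev1.PodReady && cond.Status == corev1.ConditionTrue {
	                    ready = true
	                }
	            }
	            if !ready {
	                return false, nil
	            }
	        }
	        return true, nil
	    }

	    func main() {
	        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig") // placeholder
	        if err != nil {
	            panic(err)
	        }
	        cs, err := kubernetes.NewForConfig(cfg)
	        if err != nil {
	            panic(err)
	        }
	        // Same label set the test waits on, per the log above.
	        selectors := []string{"k8s-app=kube-dns", "component=etcd", "component=kube-apiserver",
	            "component=kube-controller-manager", "k8s-app=kube-proxy", "component=kube-scheduler"}
	        for _, sel := range selectors {
	            ok, err := podsReady(cs, sel)
	            fmt.Printf("%-40s ready=%v err=%v\n", sel, ok, err)
	        }
	    }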
	I1101 09:21:41.495296  263568 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 09:21:41.994701  263568 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 09:21:42.495064  263568 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 09:21:42.994567  263568 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 09:21:43.495241  263568 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 09:21:43.575951  263568 kubeadm.go:1114] duration metric: took 4.2013384s to wait for elevateKubeSystemPrivileges
	I1101 09:21:43.575987  263568 kubeadm.go:403] duration metric: took 15.179967762s to StartCluster
	I1101 09:21:43.576007  263568 settings.go:142] acquiring lock: {Name:mkb1ba7d0d4bb15f3f0746ce486d72703f901580 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:21:43.576093  263568 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21835-5913/kubeconfig
	I1101 09:21:43.578613  263568 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21835-5913/kubeconfig: {Name:mk9d3795c1875d6d9ba81c81d9d8436a6b7942d2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:21:43.578959  263568 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1101 09:21:43.578971  263568 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1101 09:21:43.579034  263568 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1101 09:21:43.579211  263568 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-340756"
	I1101 09:21:43.579236  263568 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-340756"
	I1101 09:21:43.579254  263568 config.go:182] Loaded profile config "newest-cni-340756": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:21:43.579274  263568 host.go:66] Checking if "newest-cni-340756" exists ...
	I1101 09:21:43.579267  263568 addons.go:70] Setting default-storageclass=true in profile "newest-cni-340756"
	I1101 09:21:43.579381  263568 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-340756"
	I1101 09:21:43.579722  263568 cli_runner.go:164] Run: docker container inspect newest-cni-340756 --format={{.State.Status}}
	I1101 09:21:43.579808  263568 cli_runner.go:164] Run: docker container inspect newest-cni-340756 --format={{.State.Status}}
	I1101 09:21:43.581304  263568 out.go:179] * Verifying Kubernetes components...
	I1101 09:21:43.583140  263568 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 09:21:43.610121  263568 addons.go:239] Setting addon default-storageclass=true in "newest-cni-340756"
	I1101 09:21:43.610168  263568 host.go:66] Checking if "newest-cni-340756" exists ...
	I1101 09:21:43.610646  263568 cli_runner.go:164] Run: docker container inspect newest-cni-340756 --format={{.State.Status}}
	I1101 09:21:43.611306  263568 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1101 09:21:43.613411  263568 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1101 09:21:43.613437  263568 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1101 09:21:43.613499  263568 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-340756
	I1101 09:21:43.657272  263568 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1101 09:21:43.657341  263568 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1101 09:21:43.657427  263568 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-340756
	I1101 09:21:43.659518  263568 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/21835-5913/.minikube/machines/newest-cni-340756/id_rsa Username:docker}
	I1101 09:21:43.684025  263568 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/21835-5913/.minikube/machines/newest-cni-340756/id_rsa Username:docker}
	I1101 09:21:43.699595  263568 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.94.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1101 09:21:43.758185  263568 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1101 09:21:43.784453  263568 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1101 09:21:43.805177  263568 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1101 09:21:43.909273  263568 start.go:977] {"host.minikube.internal": 192.168.94.1} host record injected into CoreDNS's ConfigMap
	I1101 09:21:43.912845  263568 api_server.go:52] waiting for apiserver process to appear ...
	I1101 09:21:43.913009  263568 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 09:21:44.129441  263568 api_server.go:72] duration metric: took 550.43073ms to wait for apiserver process to appear ...
	I1101 09:21:44.129468  263568 api_server.go:88] waiting for apiserver healthz status ...
	I1101 09:21:44.129489  263568 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1101 09:21:44.134537  263568 api_server.go:279] https://192.168.94.2:8443/healthz returned 200:
	ok
	I1101 09:21:44.135389  263568 api_server.go:141] control plane version: v1.34.1
	I1101 09:21:44.135412  263568 api_server.go:131] duration metric: took 5.938114ms to wait for apiserver health ...
	I1101 09:21:44.135421  263568 system_pods.go:43] waiting for kube-system pods to appear ...
	I1101 09:21:44.136055  263568 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1101 09:21:44.136942  263568 addons.go:515] duration metric: took 557.903817ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1101 09:21:44.138990  263568 system_pods.go:59] 8 kube-system pods found
	I1101 09:21:44.139016  263568 system_pods.go:61] "coredns-66bc5c9577-tmnp2" [3dc7a625-aa33-404e-b8e1-4abff976bac9] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1101 09:21:44.139023  263568 system_pods.go:61] "etcd-newest-cni-340756" [5ba122dc-81df-44c9-b993-82d2381dd60c] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1101 09:21:44.139032  263568 system_pods.go:61] "kindnet-gjnst" [9c4e4a33-eff1-47ec-94bc-7f9196c547ff] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1101 09:21:44.139038  263568 system_pods.go:61] "kube-apiserver-newest-cni-340756" [fefc943a-a3b3-4069-9eed-d6a6815d3846] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1101 09:21:44.139046  263568 system_pods.go:61] "kube-controller-manager-newest-cni-340756" [f6823fe4-7c7e-4b04-8fbd-f52058100d5b] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1101 09:21:44.139051  263568 system_pods.go:61] "kube-proxy-wp2h9" [e6a908ac-4dfb-4f1c-8059-79695562a817] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1101 09:21:44.139056  263568 system_pods.go:61] "kube-scheduler-newest-cni-340756" [4673d267-6290-4f99-af1c-173b383aa4ea] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1101 09:21:44.139061  263568 system_pods.go:61] "storage-provisioner" [0e7d7956-489a-4005-ba49-4975f35bfc8a] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1101 09:21:44.139068  263568 system_pods.go:74] duration metric: took 3.64168ms to wait for pod list to return data ...
	I1101 09:21:44.139078  263568 default_sa.go:34] waiting for default service account to be created ...
	I1101 09:21:44.141225  263568 default_sa.go:45] found service account: "default"
	I1101 09:21:44.141244  263568 default_sa.go:55] duration metric: took 2.160133ms for default service account to be created ...
	I1101 09:21:44.141255  263568 kubeadm.go:587] duration metric: took 562.252373ms to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1101 09:21:44.141271  263568 node_conditions.go:102] verifying NodePressure condition ...
	I1101 09:21:44.143574  263568 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1101 09:21:44.143610  263568 node_conditions.go:123] node cpu capacity is 8
	I1101 09:21:44.143627  263568 node_conditions.go:105] duration metric: took 2.351387ms to run NodePressure ...
	I1101 09:21:44.143646  263568 start.go:242] waiting for startup goroutines ...
	I1101 09:21:44.414584  263568 kapi.go:214] "coredns" deployment in "kube-system" namespace and "newest-cni-340756" context rescaled to 1 replicas
	I1101 09:21:44.414622  263568 start.go:247] waiting for cluster config update ...
	I1101 09:21:44.414636  263568 start.go:256] writing updated cluster config ...
	I1101 09:21:44.415030  263568 ssh_runner.go:195] Run: rm -f paused
	I1101 09:21:44.473212  263568 start.go:628] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1101 09:21:44.475648  263568 out.go:179] * Done! kubectl is now configured to use "newest-cni-340756" cluster and "default" namespace by default
	W1101 09:21:39.840571  216020 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1101 09:21:42.340952  216020 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1101 09:21:42.341425  216020 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1101 09:21:42.341493  216020 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1101 09:21:42.341555  216020 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1101 09:21:42.373344  216020 cri.go:89] found id: "b88c1a6c22eb7f0b872bf19c02addfdad95c9f578a8590d19c57a767cb19559f"
	I1101 09:21:42.373374  216020 cri.go:89] found id: ""
	I1101 09:21:42.373384  216020 logs.go:282] 1 containers: [b88c1a6c22eb7f0b872bf19c02addfdad95c9f578a8590d19c57a767cb19559f]
	I1101 09:21:42.373448  216020 ssh_runner.go:195] Run: which crictl
	I1101 09:21:42.377993  216020 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1101 09:21:42.378055  216020 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1101 09:21:42.407269  216020 cri.go:89] found id: ""
	I1101 09:21:42.407298  216020 logs.go:282] 0 containers: []
	W1101 09:21:42.407310  216020 logs.go:284] No container was found matching "etcd"
	I1101 09:21:42.407318  216020 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1101 09:21:42.407378  216020 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1101 09:21:42.437089  216020 cri.go:89] found id: ""
	I1101 09:21:42.437118  216020 logs.go:282] 0 containers: []
	W1101 09:21:42.437129  216020 logs.go:284] No container was found matching "coredns"
	I1101 09:21:42.437138  216020 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1101 09:21:42.437191  216020 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1101 09:21:42.471604  216020 cri.go:89] found id: "7bc51deb4b19b20f402b361941a1b67db11ece9919ebd34259e751ec1f07abb8"
	I1101 09:21:42.471635  216020 cri.go:89] found id: ""
	I1101 09:21:42.471644  216020 logs.go:282] 1 containers: [7bc51deb4b19b20f402b361941a1b67db11ece9919ebd34259e751ec1f07abb8]
	I1101 09:21:42.471759  216020 ssh_runner.go:195] Run: which crictl
	I1101 09:21:42.477331  216020 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1101 09:21:42.477420  216020 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1101 09:21:42.511388  216020 cri.go:89] found id: ""
	I1101 09:21:42.511416  216020 logs.go:282] 0 containers: []
	W1101 09:21:42.511427  216020 logs.go:284] No container was found matching "kube-proxy"
	I1101 09:21:42.511442  216020 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1101 09:21:42.511500  216020 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1101 09:21:42.544140  216020 cri.go:89] found id: "b266a6bfcdcb5bd3d3e21a7e3af65c4839577648b3116ca9c382cea59b909867"
	I1101 09:21:42.544166  216020 cri.go:89] found id: ""
	I1101 09:21:42.544176  216020 logs.go:282] 1 containers: [b266a6bfcdcb5bd3d3e21a7e3af65c4839577648b3116ca9c382cea59b909867]
	I1101 09:21:42.544242  216020 ssh_runner.go:195] Run: which crictl
	I1101 09:21:42.549430  216020 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1101 09:21:42.549508  216020 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1101 09:21:42.584483  216020 cri.go:89] found id: ""
	I1101 09:21:42.584511  216020 logs.go:282] 0 containers: []
	W1101 09:21:42.584521  216020 logs.go:284] No container was found matching "kindnet"
	I1101 09:21:42.584529  216020 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1101 09:21:42.584583  216020 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1101 09:21:42.624167  216020 cri.go:89] found id: ""
	I1101 09:21:42.624193  216020 logs.go:282] 0 containers: []
	W1101 09:21:42.624203  216020 logs.go:284] No container was found matching "storage-provisioner"
	I1101 09:21:42.624222  216020 logs.go:123] Gathering logs for dmesg ...
	I1101 09:21:42.624239  216020 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1101 09:21:42.642149  216020 logs.go:123] Gathering logs for describe nodes ...
	I1101 09:21:42.642183  216020 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1101 09:21:42.709435  216020 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1101 09:21:42.709460  216020 logs.go:123] Gathering logs for kube-apiserver [b88c1a6c22eb7f0b872bf19c02addfdad95c9f578a8590d19c57a767cb19559f] ...
	I1101 09:21:42.709478  216020 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b88c1a6c22eb7f0b872bf19c02addfdad95c9f578a8590d19c57a767cb19559f"
	I1101 09:21:42.744665  216020 logs.go:123] Gathering logs for kube-scheduler [7bc51deb4b19b20f402b361941a1b67db11ece9919ebd34259e751ec1f07abb8] ...
	I1101 09:21:42.744704  216020 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7bc51deb4b19b20f402b361941a1b67db11ece9919ebd34259e751ec1f07abb8"
	I1101 09:21:42.799594  216020 logs.go:123] Gathering logs for kube-controller-manager [b266a6bfcdcb5bd3d3e21a7e3af65c4839577648b3116ca9c382cea59b909867] ...
	I1101 09:21:42.799630  216020 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b266a6bfcdcb5bd3d3e21a7e3af65c4839577648b3116ca9c382cea59b909867"
	I1101 09:21:42.829152  216020 logs.go:123] Gathering logs for CRI-O ...
	I1101 09:21:42.829179  216020 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1101 09:21:42.892517  216020 logs.go:123] Gathering logs for container status ...
	I1101 09:21:42.892550  216020 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1101 09:21:42.926545  216020 logs.go:123] Gathering logs for kubelet ...
	I1101 09:21:42.926570  216020 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1101 09:21:42.180979  262357 addons.go:515] duration metric: took 574.698105ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1101 09:21:42.453943  262357 kapi.go:214] "coredns" deployment in "kube-system" namespace and "default-k8s-diff-port-648641" context rescaled to 1 replicas
	W1101 09:21:43.954330  262357 node_ready.go:57] node "default-k8s-diff-port-648641" has "Ready":"False" status (will retry)
	W1101 09:21:45.954748  262357 node_ready.go:57] node "default-k8s-diff-port-648641" has "Ready":"False" status (will retry)
	I1101 09:21:45.522719  216020 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1101 09:21:45.523114  216020 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1101 09:21:45.523168  216020 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1101 09:21:45.523215  216020 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1101 09:21:45.553950  216020 cri.go:89] found id: "b88c1a6c22eb7f0b872bf19c02addfdad95c9f578a8590d19c57a767cb19559f"
	I1101 09:21:45.553977  216020 cri.go:89] found id: ""
	I1101 09:21:45.553987  216020 logs.go:282] 1 containers: [b88c1a6c22eb7f0b872bf19c02addfdad95c9f578a8590d19c57a767cb19559f]
	I1101 09:21:45.554041  216020 ssh_runner.go:195] Run: which crictl
	I1101 09:21:45.559131  216020 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1101 09:21:45.559210  216020 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1101 09:21:45.589679  216020 cri.go:89] found id: ""
	I1101 09:21:45.589705  216020 logs.go:282] 0 containers: []
	W1101 09:21:45.589713  216020 logs.go:284] No container was found matching "etcd"
	I1101 09:21:45.589719  216020 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1101 09:21:45.589771  216020 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1101 09:21:45.619397  216020 cri.go:89] found id: ""
	I1101 09:21:45.619425  216020 logs.go:282] 0 containers: []
	W1101 09:21:45.619436  216020 logs.go:284] No container was found matching "coredns"
	I1101 09:21:45.619443  216020 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1101 09:21:45.619500  216020 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1101 09:21:45.650557  216020 cri.go:89] found id: "7bc51deb4b19b20f402b361941a1b67db11ece9919ebd34259e751ec1f07abb8"
	I1101 09:21:45.650578  216020 cri.go:89] found id: ""
	I1101 09:21:45.650586  216020 logs.go:282] 1 containers: [7bc51deb4b19b20f402b361941a1b67db11ece9919ebd34259e751ec1f07abb8]
	I1101 09:21:45.650638  216020 ssh_runner.go:195] Run: which crictl
	I1101 09:21:45.654946  216020 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1101 09:21:45.655017  216020 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1101 09:21:45.683827  216020 cri.go:89] found id: ""
	I1101 09:21:45.683860  216020 logs.go:282] 0 containers: []
	W1101 09:21:45.683881  216020 logs.go:284] No container was found matching "kube-proxy"
	I1101 09:21:45.683889  216020 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1101 09:21:45.683952  216020 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1101 09:21:45.714615  216020 cri.go:89] found id: "b266a6bfcdcb5bd3d3e21a7e3af65c4839577648b3116ca9c382cea59b909867"
	I1101 09:21:45.714636  216020 cri.go:89] found id: ""
	I1101 09:21:45.714643  216020 logs.go:282] 1 containers: [b266a6bfcdcb5bd3d3e21a7e3af65c4839577648b3116ca9c382cea59b909867]
	I1101 09:21:45.714692  216020 ssh_runner.go:195] Run: which crictl
	I1101 09:21:45.718908  216020 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1101 09:21:45.718982  216020 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1101 09:21:45.748477  216020 cri.go:89] found id: ""
	I1101 09:21:45.748504  216020 logs.go:282] 0 containers: []
	W1101 09:21:45.748512  216020 logs.go:284] No container was found matching "kindnet"
	I1101 09:21:45.748517  216020 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1101 09:21:45.748574  216020 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1101 09:21:45.779711  216020 cri.go:89] found id: ""
	I1101 09:21:45.779735  216020 logs.go:282] 0 containers: []
	W1101 09:21:45.779745  216020 logs.go:284] No container was found matching "storage-provisioner"
	I1101 09:21:45.779757  216020 logs.go:123] Gathering logs for kubelet ...
	I1101 09:21:45.779775  216020 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1101 09:21:45.881616  216020 logs.go:123] Gathering logs for dmesg ...
	I1101 09:21:45.881669  216020 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1101 09:21:45.899657  216020 logs.go:123] Gathering logs for describe nodes ...
	I1101 09:21:45.899688  216020 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1101 09:21:45.964390  216020 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1101 09:21:45.964413  216020 logs.go:123] Gathering logs for kube-apiserver [b88c1a6c22eb7f0b872bf19c02addfdad95c9f578a8590d19c57a767cb19559f] ...
	I1101 09:21:45.964431  216020 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b88c1a6c22eb7f0b872bf19c02addfdad95c9f578a8590d19c57a767cb19559f"
	I1101 09:21:46.000594  216020 logs.go:123] Gathering logs for kube-scheduler [7bc51deb4b19b20f402b361941a1b67db11ece9919ebd34259e751ec1f07abb8] ...
	I1101 09:21:46.000632  216020 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7bc51deb4b19b20f402b361941a1b67db11ece9919ebd34259e751ec1f07abb8"
	I1101 09:21:46.060133  216020 logs.go:123] Gathering logs for kube-controller-manager [b266a6bfcdcb5bd3d3e21a7e3af65c4839577648b3116ca9c382cea59b909867] ...
	I1101 09:21:46.060168  216020 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b266a6bfcdcb5bd3d3e21a7e3af65c4839577648b3116ca9c382cea59b909867"
	I1101 09:21:46.090130  216020 logs.go:123] Gathering logs for CRI-O ...
	I1101 09:21:46.090165  216020 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1101 09:21:46.158765  216020 logs.go:123] Gathering logs for container status ...
	I1101 09:21:46.158794  216020 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1101 09:21:48.697045  216020 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1101 09:21:48.697494  216020 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1101 09:21:48.697551  216020 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1101 09:21:48.697601  216020 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1101 09:21:48.726560  216020 cri.go:89] found id: "b88c1a6c22eb7f0b872bf19c02addfdad95c9f578a8590d19c57a767cb19559f"
	I1101 09:21:48.726587  216020 cri.go:89] found id: ""
	I1101 09:21:48.726598  216020 logs.go:282] 1 containers: [b88c1a6c22eb7f0b872bf19c02addfdad95c9f578a8590d19c57a767cb19559f]
	I1101 09:21:48.726658  216020 ssh_runner.go:195] Run: which crictl
	I1101 09:21:48.731686  216020 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1101 09:21:48.731760  216020 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1101 09:21:48.768304  216020 cri.go:89] found id: ""
	I1101 09:21:48.768334  216020 logs.go:282] 0 containers: []
	W1101 09:21:48.768349  216020 logs.go:284] No container was found matching "etcd"
	I1101 09:21:48.768356  216020 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1101 09:21:48.768413  216020 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1101 09:21:48.807267  216020 cri.go:89] found id: ""
	I1101 09:21:48.807294  216020 logs.go:282] 0 containers: []
	W1101 09:21:48.807305  216020 logs.go:284] No container was found matching "coredns"
	I1101 09:21:48.807313  216020 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1101 09:21:48.807368  216020 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1101 09:21:48.840817  216020 cri.go:89] found id: "7bc51deb4b19b20f402b361941a1b67db11ece9919ebd34259e751ec1f07abb8"
	I1101 09:21:48.840847  216020 cri.go:89] found id: ""
	I1101 09:21:48.840857  216020 logs.go:282] 1 containers: [7bc51deb4b19b20f402b361941a1b67db11ece9919ebd34259e751ec1f07abb8]
	I1101 09:21:48.841023  216020 ssh_runner.go:195] Run: which crictl
	I1101 09:21:48.845921  216020 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1101 09:21:48.845994  216020 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1101 09:21:48.881586  216020 cri.go:89] found id: ""
	I1101 09:21:48.881615  216020 logs.go:282] 0 containers: []
	W1101 09:21:48.881625  216020 logs.go:284] No container was found matching "kube-proxy"
	I1101 09:21:48.881633  216020 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1101 09:21:48.881779  216020 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1101 09:21:48.915051  216020 cri.go:89] found id: "b266a6bfcdcb5bd3d3e21a7e3af65c4839577648b3116ca9c382cea59b909867"
	I1101 09:21:48.915077  216020 cri.go:89] found id: ""
	I1101 09:21:48.915084  216020 logs.go:282] 1 containers: [b266a6bfcdcb5bd3d3e21a7e3af65c4839577648b3116ca9c382cea59b909867]
	I1101 09:21:48.915130  216020 ssh_runner.go:195] Run: which crictl
	I1101 09:21:48.919614  216020 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1101 09:21:48.919687  216020 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1101 09:21:48.948096  216020 cri.go:89] found id: ""
	I1101 09:21:48.948119  216020 logs.go:282] 0 containers: []
	W1101 09:21:48.948127  216020 logs.go:284] No container was found matching "kindnet"
	I1101 09:21:48.948134  216020 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1101 09:21:48.948189  216020 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1101 09:21:48.979089  216020 cri.go:89] found id: ""
	I1101 09:21:48.979117  216020 logs.go:282] 0 containers: []
	W1101 09:21:48.979127  216020 logs.go:284] No container was found matching "storage-provisioner"
	I1101 09:21:48.979138  216020 logs.go:123] Gathering logs for kube-scheduler [7bc51deb4b19b20f402b361941a1b67db11ece9919ebd34259e751ec1f07abb8] ...
	I1101 09:21:48.979155  216020 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7bc51deb4b19b20f402b361941a1b67db11ece9919ebd34259e751ec1f07abb8"
	I1101 09:21:49.034966  216020 logs.go:123] Gathering logs for kube-controller-manager [b266a6bfcdcb5bd3d3e21a7e3af65c4839577648b3116ca9c382cea59b909867] ...
	I1101 09:21:49.035003  216020 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b266a6bfcdcb5bd3d3e21a7e3af65c4839577648b3116ca9c382cea59b909867"
	I1101 09:21:49.063745  216020 logs.go:123] Gathering logs for CRI-O ...
	I1101 09:21:49.063772  216020 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1101 09:21:49.122882  216020 logs.go:123] Gathering logs for container status ...
	I1101 09:21:49.122918  216020 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1101 09:21:49.156060  216020 logs.go:123] Gathering logs for kubelet ...
	I1101 09:21:49.156092  216020 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1101 09:21:49.254647  216020 logs.go:123] Gathering logs for dmesg ...
	I1101 09:21:49.254688  216020 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1101 09:21:49.271698  216020 logs.go:123] Gathering logs for describe nodes ...
	I1101 09:21:49.271738  216020 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1101 09:21:49.332679  216020 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1101 09:21:49.332708  216020 logs.go:123] Gathering logs for kube-apiserver [b88c1a6c22eb7f0b872bf19c02addfdad95c9f578a8590d19c57a767cb19559f] ...
	I1101 09:21:49.332729  216020 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b88c1a6c22eb7f0b872bf19c02addfdad95c9f578a8590d19c57a767cb19559f"
	W1101 09:21:47.954809  262357 node_ready.go:57] node "default-k8s-diff-port-648641" has "Ready":"False" status (will retry)
	W1101 09:21:49.954891  262357 node_ready.go:57] node "default-k8s-diff-port-648641" has "Ready":"False" status (will retry)
	I1101 09:21:51.867378  216020 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1101 09:21:51.867845  216020 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1101 09:21:51.867942  216020 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1101 09:21:51.867990  216020 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1101 09:21:51.897796  216020 cri.go:89] found id: "b88c1a6c22eb7f0b872bf19c02addfdad95c9f578a8590d19c57a767cb19559f"
	I1101 09:21:51.897822  216020 cri.go:89] found id: ""
	I1101 09:21:51.897831  216020 logs.go:282] 1 containers: [b88c1a6c22eb7f0b872bf19c02addfdad95c9f578a8590d19c57a767cb19559f]
	I1101 09:21:51.897908  216020 ssh_runner.go:195] Run: which crictl
	I1101 09:21:51.902172  216020 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1101 09:21:51.902249  216020 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1101 09:21:51.932998  216020 cri.go:89] found id: ""
	I1101 09:21:51.933022  216020 logs.go:282] 0 containers: []
	W1101 09:21:51.933033  216020 logs.go:284] No container was found matching "etcd"
	I1101 09:21:51.933040  216020 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1101 09:21:51.933099  216020 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1101 09:21:51.961795  216020 cri.go:89] found id: ""
	I1101 09:21:51.961819  216020 logs.go:282] 0 containers: []
	W1101 09:21:51.961827  216020 logs.go:284] No container was found matching "coredns"
	I1101 09:21:51.961832  216020 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1101 09:21:51.961898  216020 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1101 09:21:51.990289  216020 cri.go:89] found id: "7bc51deb4b19b20f402b361941a1b67db11ece9919ebd34259e751ec1f07abb8"
	I1101 09:21:51.990309  216020 cri.go:89] found id: ""
	I1101 09:21:51.990316  216020 logs.go:282] 1 containers: [7bc51deb4b19b20f402b361941a1b67db11ece9919ebd34259e751ec1f07abb8]
	I1101 09:21:51.990372  216020 ssh_runner.go:195] Run: which crictl
	I1101 09:21:51.994608  216020 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1101 09:21:51.994677  216020 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1101 09:21:52.022796  216020 cri.go:89] found id: ""
	I1101 09:21:52.022825  216020 logs.go:282] 0 containers: []
	W1101 09:21:52.022837  216020 logs.go:284] No container was found matching "kube-proxy"
	I1101 09:21:52.022844  216020 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1101 09:21:52.022927  216020 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1101 09:21:52.051439  216020 cri.go:89] found id: "b266a6bfcdcb5bd3d3e21a7e3af65c4839577648b3116ca9c382cea59b909867"
	I1101 09:21:52.051459  216020 cri.go:89] found id: ""
	I1101 09:21:52.051466  216020 logs.go:282] 1 containers: [b266a6bfcdcb5bd3d3e21a7e3af65c4839577648b3116ca9c382cea59b909867]
	I1101 09:21:52.051520  216020 ssh_runner.go:195] Run: which crictl
	I1101 09:21:52.055672  216020 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1101 09:21:52.055747  216020 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1101 09:21:52.084243  216020 cri.go:89] found id: ""
	I1101 09:21:52.084271  216020 logs.go:282] 0 containers: []
	W1101 09:21:52.084279  216020 logs.go:284] No container was found matching "kindnet"
	I1101 09:21:52.084285  216020 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1101 09:21:52.084345  216020 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1101 09:21:52.112387  216020 cri.go:89] found id: ""
	I1101 09:21:52.112410  216020 logs.go:282] 0 containers: []
	W1101 09:21:52.112417  216020 logs.go:284] No container was found matching "storage-provisioner"
	I1101 09:21:52.112426  216020 logs.go:123] Gathering logs for kube-controller-manager [b266a6bfcdcb5bd3d3e21a7e3af65c4839577648b3116ca9c382cea59b909867] ...
	I1101 09:21:52.112438  216020 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b266a6bfcdcb5bd3d3e21a7e3af65c4839577648b3116ca9c382cea59b909867"
	I1101 09:21:52.140855  216020 logs.go:123] Gathering logs for CRI-O ...
	I1101 09:21:52.140905  216020 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1101 09:21:52.197633  216020 logs.go:123] Gathering logs for container status ...
	I1101 09:21:52.197670  216020 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1101 09:21:52.230192  216020 logs.go:123] Gathering logs for kubelet ...
	I1101 09:21:52.230220  216020 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1101 09:21:52.325219  216020 logs.go:123] Gathering logs for dmesg ...
	I1101 09:21:52.325258  216020 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1101 09:21:52.343387  216020 logs.go:123] Gathering logs for describe nodes ...
	I1101 09:21:52.343421  216020 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1101 09:21:52.402463  216020 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1101 09:21:52.402489  216020 logs.go:123] Gathering logs for kube-apiserver [b88c1a6c22eb7f0b872bf19c02addfdad95c9f578a8590d19c57a767cb19559f] ...
	I1101 09:21:52.402504  216020 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b88c1a6c22eb7f0b872bf19c02addfdad95c9f578a8590d19c57a767cb19559f"
	I1101 09:21:52.436914  216020 logs.go:123] Gathering logs for kube-scheduler [7bc51deb4b19b20f402b361941a1b67db11ece9919ebd34259e751ec1f07abb8] ...
	I1101 09:21:52.436947  216020 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7bc51deb4b19b20f402b361941a1b67db11ece9919ebd34259e751ec1f07abb8"
	W1101 09:21:52.454402  262357 node_ready.go:57] node "default-k8s-diff-port-648641" has "Ready":"False" status (will retry)
	I1101 09:21:53.456327  262357 node_ready.go:49] node "default-k8s-diff-port-648641" is "Ready"
	I1101 09:21:53.456364  262357 node_ready.go:38] duration metric: took 11.505521672s for node "default-k8s-diff-port-648641" to be "Ready" ...
	I1101 09:21:53.456382  262357 api_server.go:52] waiting for apiserver process to appear ...
	I1101 09:21:53.456439  262357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 09:21:53.472403  262357 api_server.go:72] duration metric: took 11.866171832s to wait for apiserver process to appear ...
	I1101 09:21:53.472434  262357 api_server.go:88] waiting for apiserver healthz status ...
	I1101 09:21:53.472456  262357 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8444/healthz ...
	I1101 09:21:53.478489  262357 api_server.go:279] https://192.168.103.2:8444/healthz returned 200:
	ok
	I1101 09:21:53.479497  262357 api_server.go:141] control plane version: v1.34.1
	I1101 09:21:53.479525  262357 api_server.go:131] duration metric: took 7.082976ms to wait for apiserver health ...
	I1101 09:21:53.479536  262357 system_pods.go:43] waiting for kube-system pods to appear ...
	I1101 09:21:53.483252  262357 system_pods.go:59] 8 kube-system pods found
	I1101 09:21:53.483295  262357 system_pods.go:61] "coredns-66bc5c9577-nwj2s" [8ff7491d-6812-4d96-a51a-e633029265b2] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 09:21:53.483306  262357 system_pods.go:61] "etcd-default-k8s-diff-port-648641" [6e3ca262-a470-4da9-808d-7fc96780750a] Running
	I1101 09:21:53.483323  262357 system_pods.go:61] "kindnet-fr9cg" [d6592d6f-2eb5-439c-8432-879dfed97262] Running
	I1101 09:21:53.483328  262357 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-648641" [26eb68dd-b2b9-4b7f-b451-153ea7e07a22] Running
	I1101 09:21:53.483332  262357 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-648641" [b81d16be-9a35-49d2-be62-ed4120d926c4] Running
	I1101 09:21:53.483337  262357 system_pods.go:61] "kube-proxy-nwrt4" [654df017-7b12-4834-b1af-10bb81208e93] Running
	I1101 09:21:53.483342  262357 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-648641" [6491deee-ae77-4f3b-974a-1d9cce461061] Running
	I1101 09:21:53.483351  262357 system_pods.go:61] "storage-provisioner" [740e55f6-d6f6-423d-8f2b-8b68885e6d6b] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1101 09:21:53.483359  262357 system_pods.go:74] duration metric: took 3.817126ms to wait for pod list to return data ...
	I1101 09:21:53.483374  262357 default_sa.go:34] waiting for default service account to be created ...
	I1101 09:21:53.485744  262357 default_sa.go:45] found service account: "default"
	I1101 09:21:53.485767  262357 default_sa.go:55] duration metric: took 2.384931ms for default service account to be created ...
	I1101 09:21:53.485779  262357 system_pods.go:116] waiting for k8s-apps to be running ...
	I1101 09:21:53.488171  262357 system_pods.go:86] 8 kube-system pods found
	I1101 09:21:53.488197  262357 system_pods.go:89] "coredns-66bc5c9577-nwj2s" [8ff7491d-6812-4d96-a51a-e633029265b2] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 09:21:53.488202  262357 system_pods.go:89] "etcd-default-k8s-diff-port-648641" [6e3ca262-a470-4da9-808d-7fc96780750a] Running
	I1101 09:21:53.488208  262357 system_pods.go:89] "kindnet-fr9cg" [d6592d6f-2eb5-439c-8432-879dfed97262] Running
	I1101 09:21:53.488212  262357 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-648641" [26eb68dd-b2b9-4b7f-b451-153ea7e07a22] Running
	I1101 09:21:53.488215  262357 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-648641" [b81d16be-9a35-49d2-be62-ed4120d926c4] Running
	I1101 09:21:53.488218  262357 system_pods.go:89] "kube-proxy-nwrt4" [654df017-7b12-4834-b1af-10bb81208e93] Running
	I1101 09:21:53.488221  262357 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-648641" [6491deee-ae77-4f3b-974a-1d9cce461061] Running
	I1101 09:21:53.488230  262357 system_pods.go:89] "storage-provisioner" [740e55f6-d6f6-423d-8f2b-8b68885e6d6b] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1101 09:21:53.488247  262357 retry.go:31] will retry after 213.117858ms: missing components: kube-dns
	I1101 09:21:53.705230  262357 system_pods.go:86] 8 kube-system pods found
	I1101 09:21:53.705264  262357 system_pods.go:89] "coredns-66bc5c9577-nwj2s" [8ff7491d-6812-4d96-a51a-e633029265b2] Running
	I1101 09:21:53.705270  262357 system_pods.go:89] "etcd-default-k8s-diff-port-648641" [6e3ca262-a470-4da9-808d-7fc96780750a] Running
	I1101 09:21:53.705276  262357 system_pods.go:89] "kindnet-fr9cg" [d6592d6f-2eb5-439c-8432-879dfed97262] Running
	I1101 09:21:53.705279  262357 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-648641" [26eb68dd-b2b9-4b7f-b451-153ea7e07a22] Running
	I1101 09:21:53.705284  262357 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-648641" [b81d16be-9a35-49d2-be62-ed4120d926c4] Running
	I1101 09:21:53.705289  262357 system_pods.go:89] "kube-proxy-nwrt4" [654df017-7b12-4834-b1af-10bb81208e93] Running
	I1101 09:21:53.705294  262357 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-648641" [6491deee-ae77-4f3b-974a-1d9cce461061] Running
	I1101 09:21:53.705299  262357 system_pods.go:89] "storage-provisioner" [740e55f6-d6f6-423d-8f2b-8b68885e6d6b] Running
	I1101 09:21:53.705309  262357 system_pods.go:126] duration metric: took 219.522684ms to wait for k8s-apps to be running ...
	I1101 09:21:53.705322  262357 system_svc.go:44] waiting for kubelet service to be running ....
	I1101 09:21:53.705373  262357 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 09:21:53.719220  262357 system_svc.go:56] duration metric: took 13.885698ms WaitForService to wait for kubelet
	I1101 09:21:53.719251  262357 kubeadm.go:587] duration metric: took 12.113026571s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1101 09:21:53.719270  262357 node_conditions.go:102] verifying NodePressure condition ...
	I1101 09:21:53.722638  262357 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1101 09:21:53.722666  262357 node_conditions.go:123] node cpu capacity is 8
	I1101 09:21:53.722682  262357 node_conditions.go:105] duration metric: took 3.407419ms to run NodePressure ...
	I1101 09:21:53.722695  262357 start.go:242] waiting for startup goroutines ...
	I1101 09:21:53.722702  262357 start.go:247] waiting for cluster config update ...
	I1101 09:21:53.722716  262357 start.go:256] writing updated cluster config ...
	I1101 09:21:53.723062  262357 ssh_runner.go:195] Run: rm -f paused
	I1101 09:21:53.727150  262357 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1101 09:21:53.731191  262357 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-nwj2s" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:21:53.736190  262357 pod_ready.go:94] pod "coredns-66bc5c9577-nwj2s" is "Ready"
	I1101 09:21:53.736217  262357 pod_ready.go:86] duration metric: took 4.999753ms for pod "coredns-66bc5c9577-nwj2s" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:21:53.738494  262357 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-648641" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:21:53.743092  262357 pod_ready.go:94] pod "etcd-default-k8s-diff-port-648641" is "Ready"
	I1101 09:21:53.743120  262357 pod_ready.go:86] duration metric: took 4.60429ms for pod "etcd-default-k8s-diff-port-648641" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:21:53.745309  262357 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-648641" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:21:53.749633  262357 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-648641" is "Ready"
	I1101 09:21:53.749659  262357 pod_ready.go:86] duration metric: took 4.325762ms for pod "kube-apiserver-default-k8s-diff-port-648641" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:21:53.752042  262357 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-648641" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:21:54.131741  262357 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-648641" is "Ready"
	I1101 09:21:54.131770  262357 pod_ready.go:86] duration metric: took 379.697888ms for pod "kube-controller-manager-default-k8s-diff-port-648641" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:21:54.332232  262357 pod_ready.go:83] waiting for pod "kube-proxy-nwrt4" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:21:54.731904  262357 pod_ready.go:94] pod "kube-proxy-nwrt4" is "Ready"
	I1101 09:21:54.731936  262357 pod_ready.go:86] duration metric: took 399.676133ms for pod "kube-proxy-nwrt4" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:21:54.932695  262357 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-648641" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:21:55.331553  262357 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-648641" is "Ready"
	I1101 09:21:55.331586  262357 pod_ready.go:86] duration metric: took 398.857413ms for pod "kube-scheduler-default-k8s-diff-port-648641" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:21:55.331602  262357 pod_ready.go:40] duration metric: took 1.604421349s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1101 09:21:55.379607  262357 start.go:628] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1101 09:21:55.381383  262357 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-648641" cluster and "default" namespace by default
	I1101 09:21:54.994699  216020 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1101 09:21:54.995256  216020 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1101 09:21:54.995319  216020 kubeadm.go:602] duration metric: took 4m3.825805261s to restartPrimaryControlPlane
	W1101 09:21:54.995377  216020 out.go:285] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1101 09:21:54.995429  216020 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1101 09:21:55.586091  216020 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 09:21:55.599281  216020 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1101 09:21:55.608013  216020 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1101 09:21:55.608082  216020 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1101 09:21:55.616561  216020 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1101 09:21:55.616581  216020 kubeadm.go:158] found existing configuration files:
	
	I1101 09:21:55.616630  216020 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1101 09:21:55.624973  216020 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1101 09:21:55.625038  216020 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1101 09:21:55.633638  216020 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1101 09:21:55.642735  216020 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1101 09:21:55.642800  216020 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1101 09:21:55.657690  216020 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1101 09:21:55.666461  216020 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1101 09:21:55.666539  216020 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1101 09:21:55.674770  216020 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1101 09:21:55.683528  216020 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1101 09:21:55.683594  216020 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1101 09:21:55.691967  216020 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1101 09:21:55.730804  216020 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1101 09:21:55.730905  216020 kubeadm.go:319] [preflight] Running pre-flight checks
	I1101 09:21:55.754428  216020 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1101 09:21:55.754532  216020 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1043-gcp
	I1101 09:21:55.754607  216020 kubeadm.go:319] OS: Linux
	I1101 09:21:55.754703  216020 kubeadm.go:319] CGROUPS_CPU: enabled
	I1101 09:21:55.754784  216020 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1101 09:21:55.754858  216020 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1101 09:21:55.754956  216020 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1101 09:21:55.755036  216020 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1101 09:21:55.755110  216020 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1101 09:21:55.755201  216020 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1101 09:21:55.755281  216020 kubeadm.go:319] CGROUPS_IO: enabled
	I1101 09:21:55.821358  216020 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1101 09:21:55.821534  216020 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1101 09:21:55.821694  216020 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1101 09:21:55.829563  216020 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1101 09:21:55.833984  216020 out.go:252]   - Generating certificates and keys ...
	I1101 09:21:55.834120  216020 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1101 09:21:55.834256  216020 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1101 09:21:55.834382  216020 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1101 09:21:55.834459  216020 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1101 09:21:55.834519  216020 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1101 09:21:55.834582  216020 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1101 09:21:55.834677  216020 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1101 09:21:55.834786  216020 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1101 09:21:55.834900  216020 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1101 09:21:55.835064  216020 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1101 09:21:55.835127  216020 kubeadm.go:319] [certs] Using the existing "sa" key
	I1101 09:21:55.835241  216020 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1101 09:21:56.199261  216020 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1101 09:21:56.382368  216020 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1101 09:21:56.479307  216020 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1101 09:21:56.966146  216020 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1101 09:21:57.443893  216020 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1101 09:21:57.444375  216020 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1101 09:21:57.446500  216020 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	
	
	==> CRI-O <==
	Nov 01 09:21:22 embed-certs-236314 crio[560]: time="2025-11-01T09:21:22.764911642Z" level=info msg="Created container ab9a1c2871ebfdf28b52214510e1799784842fc9e5a2a4f8ac62fa64668e5010: kubernetes-dashboard/kubernetes-dashboard-855c9754f9-vtj9p/kubernetes-dashboard" id=76035249-0966-41a3-92c1-1ec3c45ba712 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 09:21:22 embed-certs-236314 crio[560]: time="2025-11-01T09:21:22.765960596Z" level=info msg="Starting container: ab9a1c2871ebfdf28b52214510e1799784842fc9e5a2a4f8ac62fa64668e5010" id=66b39d30-dcd0-4573-97cb-edc26b67b88c name=/runtime.v1.RuntimeService/StartContainer
	Nov 01 09:21:22 embed-certs-236314 crio[560]: time="2025-11-01T09:21:22.768527358Z" level=info msg="Started container" PID=1710 containerID=ab9a1c2871ebfdf28b52214510e1799784842fc9e5a2a4f8ac62fa64668e5010 description=kubernetes-dashboard/kubernetes-dashboard-855c9754f9-vtj9p/kubernetes-dashboard id=66b39d30-dcd0-4573-97cb-edc26b67b88c name=/runtime.v1.RuntimeService/StartContainer sandboxID=a3e0275e5ce4b52cf9c745e50b5d55c6ccc559a87ae59a82249bd37a745f6d55
	Nov 01 09:21:39 embed-certs-236314 crio[560]: time="2025-11-01T09:21:39.290905939Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=01184b61-6da0-4251-84cf-5e172e8093f2 name=/runtime.v1.ImageService/ImageStatus
	Nov 01 09:21:39 embed-certs-236314 crio[560]: time="2025-11-01T09:21:39.292146638Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=cea464fc-9e08-4187-9022-a248febc38eb name=/runtime.v1.ImageService/ImageStatus
	Nov 01 09:21:39 embed-certs-236314 crio[560]: time="2025-11-01T09:21:39.293683727Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-zt69p/dashboard-metrics-scraper" id=a98e1bd8-b0ae-490b-a817-ae6aaacbc6d6 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 09:21:39 embed-certs-236314 crio[560]: time="2025-11-01T09:21:39.293843662Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 09:21:39 embed-certs-236314 crio[560]: time="2025-11-01T09:21:39.301061939Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 09:21:39 embed-certs-236314 crio[560]: time="2025-11-01T09:21:39.301715895Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 09:21:39 embed-certs-236314 crio[560]: time="2025-11-01T09:21:39.350650395Z" level=info msg="Created container 97e232d23f29552301319ab346cf13e85b89566b637b24177cd78bdfb630fd2f: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-zt69p/dashboard-metrics-scraper" id=a98e1bd8-b0ae-490b-a817-ae6aaacbc6d6 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 09:21:39 embed-certs-236314 crio[560]: time="2025-11-01T09:21:39.351516772Z" level=info msg="Starting container: 97e232d23f29552301319ab346cf13e85b89566b637b24177cd78bdfb630fd2f" id=bdb3a04e-82a2-4bc2-8cb5-10c94ca06458 name=/runtime.v1.RuntimeService/StartContainer
	Nov 01 09:21:39 embed-certs-236314 crio[560]: time="2025-11-01T09:21:39.354257615Z" level=info msg="Started container" PID=1729 containerID=97e232d23f29552301319ab346cf13e85b89566b637b24177cd78bdfb630fd2f description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-zt69p/dashboard-metrics-scraper id=bdb3a04e-82a2-4bc2-8cb5-10c94ca06458 name=/runtime.v1.RuntimeService/StartContainer sandboxID=140e0c3418f98a0793f2c6f68d4699316bf74c7bd5f51c10cfcabd622151687e
	Nov 01 09:21:39 embed-certs-236314 crio[560]: time="2025-11-01T09:21:39.433465582Z" level=info msg="Removing container: c4cc50c8c91718cdd4a830baa14948c9260b14397567a66243b02ec45b341317" id=2eadbd93-8ac5-4af1-b722-ce57065d8ced name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 01 09:21:39 embed-certs-236314 crio[560]: time="2025-11-01T09:21:39.452573773Z" level=info msg="Removed container c4cc50c8c91718cdd4a830baa14948c9260b14397567a66243b02ec45b341317: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-zt69p/dashboard-metrics-scraper" id=2eadbd93-8ac5-4af1-b722-ce57065d8ced name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 01 09:21:40 embed-certs-236314 crio[560]: time="2025-11-01T09:21:40.432955431Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=ede4e327-cad3-4f0a-8dae-47a8cb99730b name=/runtime.v1.ImageService/ImageStatus
	Nov 01 09:21:40 embed-certs-236314 crio[560]: time="2025-11-01T09:21:40.433937954Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=fb61cea0-478c-4dd4-8e2f-085d12935c14 name=/runtime.v1.ImageService/ImageStatus
	Nov 01 09:21:40 embed-certs-236314 crio[560]: time="2025-11-01T09:21:40.435088398Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=ea3fe609-9058-45f5-8983-267ec8c3e5a9 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 09:21:40 embed-certs-236314 crio[560]: time="2025-11-01T09:21:40.435222827Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 09:21:40 embed-certs-236314 crio[560]: time="2025-11-01T09:21:40.439536824Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 09:21:40 embed-certs-236314 crio[560]: time="2025-11-01T09:21:40.439695759Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/cc0de986ca118965e38efc0e6848f19f6137ed66f7ca9a8754d32a7a44524164/merged/etc/passwd: no such file or directory"
	Nov 01 09:21:40 embed-certs-236314 crio[560]: time="2025-11-01T09:21:40.439720388Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/cc0de986ca118965e38efc0e6848f19f6137ed66f7ca9a8754d32a7a44524164/merged/etc/group: no such file or directory"
	Nov 01 09:21:40 embed-certs-236314 crio[560]: time="2025-11-01T09:21:40.439982305Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 09:21:40 embed-certs-236314 crio[560]: time="2025-11-01T09:21:40.46561482Z" level=info msg="Created container 6c99ae25ef0e9393dddf231085bca13268e9d35c7587c2535d9874ef0b8bc855: kube-system/storage-provisioner/storage-provisioner" id=ea3fe609-9058-45f5-8983-267ec8c3e5a9 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 09:21:40 embed-certs-236314 crio[560]: time="2025-11-01T09:21:40.466271061Z" level=info msg="Starting container: 6c99ae25ef0e9393dddf231085bca13268e9d35c7587c2535d9874ef0b8bc855" id=7dd966a8-c5fa-4f6e-b813-2e9d852bd295 name=/runtime.v1.RuntimeService/StartContainer
	Nov 01 09:21:40 embed-certs-236314 crio[560]: time="2025-11-01T09:21:40.468486867Z" level=info msg="Started container" PID=1746 containerID=6c99ae25ef0e9393dddf231085bca13268e9d35c7587c2535d9874ef0b8bc855 description=kube-system/storage-provisioner/storage-provisioner id=7dd966a8-c5fa-4f6e-b813-2e9d852bd295 name=/runtime.v1.RuntimeService/StartContainer sandboxID=00ef337336f28a6b16d03c1f8a448ce269ca75121c5dd095a114322e5ecf1816
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	6c99ae25ef0e9       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           18 seconds ago      Running             storage-provisioner         1                   00ef337336f28       storage-provisioner                          kube-system
	97e232d23f295       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           19 seconds ago      Exited              dashboard-metrics-scraper   2                   140e0c3418f98       dashboard-metrics-scraper-6ffb444bf9-zt69p   kubernetes-dashboard
	ab9a1c2871ebf       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   36 seconds ago      Running             kubernetes-dashboard        0                   a3e0275e5ce4b       kubernetes-dashboard-855c9754f9-vtj9p        kubernetes-dashboard
	12559a21df4b0       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           49 seconds ago      Running             busybox                     1                   04907abdd6503       busybox                                      default
	8d11e282bdb58       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                           49 seconds ago      Running             coredns                     0                   60f3b575e0bd4       coredns-66bc5c9577-wwvth                     kube-system
	c76b1cf0e992c       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           49 seconds ago      Running             kindnet-cni                 0                   a95572f26ed91       kindnet-mf8mj                                kube-system
	ff5eeb3598d0e       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           49 seconds ago      Exited              storage-provisioner         0                   00ef337336f28       storage-provisioner                          kube-system
	cf0921be2c864       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                           49 seconds ago      Running             kube-proxy                  0                   875d6e71db9c1       kube-proxy-55ft8                             kube-system
	bca3056e43561       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                           51 seconds ago      Running             etcd                        0                   b45a69f6f08e2       etcd-embed-certs-236314                      kube-system
	cdf866b372073       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                           51 seconds ago      Running             kube-scheduler              0                   95521059ca665       kube-scheduler-embed-certs-236314            kube-system
	c53066ca825ef       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                           51 seconds ago      Running             kube-controller-manager     0                   1b3862d595106       kube-controller-manager-embed-certs-236314   kube-system
	63c22508cf705       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                           51 seconds ago      Running             kube-apiserver              0                   1d2cdfe596f49       kube-apiserver-embed-certs-236314            kube-system
	
	
	==> coredns [8d11e282bdb581fc10907660c4ed84334e43ee3c72fbd91f47dfa5bd7fadf948] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:34944 - 31645 "HINFO IN 3901251451262326739.5616493826336659462. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.07433026s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               embed-certs-236314
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-236314
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=21e20c7776311c6e29254646bf2620ea610dd192
	                    minikube.k8s.io/name=embed-certs-236314
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_01T09_20_14_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 01 Nov 2025 09:20:10 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-236314
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 01 Nov 2025 09:21:49 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 01 Nov 2025 09:21:49 +0000   Sat, 01 Nov 2025 09:20:09 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 01 Nov 2025 09:21:49 +0000   Sat, 01 Nov 2025 09:20:09 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 01 Nov 2025 09:21:49 +0000   Sat, 01 Nov 2025 09:20:09 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 01 Nov 2025 09:21:49 +0000   Sat, 01 Nov 2025 09:20:29 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    embed-certs-236314
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863348Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863348Ki
	  pods:               110
	System Info:
	  Machine ID:                 98aac72b9abe9f06f1b9b38568f5cc96
	  System UUID:                dee9e247-3614-413a-be45-584e8f9ead09
	  Boot ID:                    87f40279-f2b1-40f3-beea-00aea5942dd1
	  Kernel Version:             6.8.0-1043-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         86s
	  kube-system                 coredns-66bc5c9577-wwvth                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     100s
	  kube-system                 etcd-embed-certs-236314                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         107s
	  kube-system                 kindnet-mf8mj                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      100s
	  kube-system                 kube-apiserver-embed-certs-236314             250m (3%)     0 (0%)      0 (0%)           0 (0%)         105s
	  kube-system                 kube-controller-manager-embed-certs-236314    200m (2%)     0 (0%)      0 (0%)           0 (0%)         105s
	  kube-system                 kube-proxy-55ft8                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         100s
	  kube-system                 kube-scheduler-embed-certs-236314             100m (1%)     0 (0%)      0 (0%)           0 (0%)         105s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         99s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-zt69p    0 (0%)        0 (0%)      0 (0%)           0 (0%)         46s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-vtj9p         0 (0%)        0 (0%)      0 (0%)           0 (0%)         46s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 99s                  kube-proxy       
	  Normal  Starting                 48s                  kube-proxy       
	  Normal  Starting                 110s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  110s (x8 over 110s)  kubelet          Node embed-certs-236314 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    110s (x8 over 110s)  kubelet          Node embed-certs-236314 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     110s (x8 over 110s)  kubelet          Node embed-certs-236314 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    105s                 kubelet          Node embed-certs-236314 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  105s                 kubelet          Node embed-certs-236314 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     105s                 kubelet          Node embed-certs-236314 status is now: NodeHasSufficientPID
	  Normal  Starting                 105s                 kubelet          Starting kubelet.
	  Normal  RegisteredNode           101s                 node-controller  Node embed-certs-236314 event: Registered Node embed-certs-236314 in Controller
	  Normal  NodeReady                89s                  kubelet          Node embed-certs-236314 status is now: NodeReady
	  Normal  Starting                 52s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  52s (x8 over 52s)    kubelet          Node embed-certs-236314 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    52s (x8 over 52s)    kubelet          Node embed-certs-236314 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     52s (x8 over 52s)    kubelet          Node embed-certs-236314 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           46s                  node-controller  Node embed-certs-236314 event: Registered Node embed-certs-236314 in Controller
	
	
	==> dmesg <==
	[  +0.000025] ll header: 00000000: 0a 94 8e 4a 1a 1e 72 e7 eb 50 9b c7 08 00
	[  +2.047726] IPv4: martian source 10.105.176.244 from 192.168.49.2, on dev eth0
	[  +0.000008] ll header: 00000000: 0a 94 8e 4a 1a 1e 72 e7 eb 50 9b c7 08 00
	[Nov 1 08:38] IPv4: martian source 10.105.176.244 from 192.168.49.2, on dev eth0
	[  +0.000025] ll header: 00000000: 0a 94 8e 4a 1a 1e 72 e7 eb 50 9b c7 08 00
	[  +2.215674] IPv4: martian source 10.105.176.244 from 192.168.49.2, on dev eth0
	[  +0.000007] ll header: 00000000: 0a 94 8e 4a 1a 1e 72 e7 eb 50 9b c7 08 00
	[  +1.047939] IPv4: martian source 10.244.0.3 from 192.168.49.2, on dev eth0
	[  +0.000023] ll header: 00000000: 0a 94 8e 4a 1a 1e 72 e7 eb 50 9b c7 08 00
	[  +1.023915] IPv4: martian source 10.244.0.3 from 192.168.49.2, on dev eth0
	[  +0.000007] ll header: 00000000: 0a 94 8e 4a 1a 1e 72 e7 eb 50 9b c7 08 00
	[  +1.023851] IPv4: martian source 10.244.0.3 from 192.168.49.2, on dev eth0
	[  +0.000021] ll header: 00000000: 0a 94 8e 4a 1a 1e 72 e7 eb 50 9b c7 08 00
	[  +1.023867] IPv4: martian source 10.244.0.3 from 192.168.49.2, on dev eth0
	[  +0.000026] ll header: 00000000: 0a 94 8e 4a 1a 1e 72 e7 eb 50 9b c7 08 00
	[  +1.023880] IPv4: martian source 10.244.0.3 from 192.168.49.2, on dev eth0
	[  +0.000033] ll header: 00000000: 0a 94 8e 4a 1a 1e 72 e7 eb 50 9b c7 08 00
	[  +1.023884] IPv4: martian source 10.244.0.3 from 192.168.49.2, on dev eth0
	[  +0.000029] ll header: 00000000: 0a 94 8e 4a 1a 1e 72 e7 eb 50 9b c7 08 00
	[  +1.023851] IPv4: martian source 10.244.0.3 from 192.168.49.2, on dev eth0
	[  +0.000028] ll header: 00000000: 0a 94 8e 4a 1a 1e 72 e7 eb 50 9b c7 08 00
	[  +4.031524] IPv4: martian source 10.244.0.3 from 192.168.49.2, on dev eth0
	[  +0.000026] ll header: 00000000: 0a 94 8e 4a 1a 1e 72 e7 eb 50 9b c7 08 00
	[  +8.255123] IPv4: martian source 10.244.0.3 from 192.168.49.2, on dev eth0
	[  +0.000022] ll header: 00000000: 0a 94 8e 4a 1a 1e 72 e7 eb 50 9b c7 08 00
	
	
	==> etcd [bca3056e4356124989f2b2cba8377cf3f660970574583fcca877cb776005e6ca] <==
	{"level":"warn","ts":"2025-11-01T09:21:08.205130Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48850","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:21:08.211269Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48864","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:21:08.217359Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48874","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:21:08.224507Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48886","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:21:08.239336Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48916","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:21:08.246649Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48938","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:21:08.253564Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48952","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:21:08.261516Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48976","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:21:08.269760Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48992","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:21:08.278195Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49020","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:21:08.287303Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49044","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:21:08.304042Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49058","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:21:08.312361Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49072","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:21:08.326177Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49094","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:21:08.330330Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49104","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:21:08.339060Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49110","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:21:08.347137Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49114","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:21:08.406294Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49144","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-11-01T09:21:16.080441Z","caller":"traceutil/trace.go:172","msg":"trace[671642900] transaction","detail":"{read_only:false; response_revision:543; number_of_response:1; }","duration":"134.696902ms","start":"2025-11-01T09:21:15.945721Z","end":"2025-11-01T09:21:16.080418Z","steps":["trace[671642900] 'process raft request'  (duration: 134.576372ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-01T09:21:16.302319Z","caller":"traceutil/trace.go:172","msg":"trace[2109263290] transaction","detail":"{read_only:false; response_revision:545; number_of_response:1; }","duration":"124.737748ms","start":"2025-11-01T09:21:16.177560Z","end":"2025-11-01T09:21:16.302297Z","steps":["trace[2109263290] 'process raft request'  (duration: 123.585591ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-01T09:21:21.029257Z","caller":"traceutil/trace.go:172","msg":"trace[188115137] transaction","detail":"{read_only:false; response_revision:618; number_of_response:1; }","duration":"174.299461ms","start":"2025-11-01T09:21:20.854940Z","end":"2025-11-01T09:21:21.029239Z","steps":["trace[188115137] 'process raft request'  (duration: 161.273591ms)","trace[188115137] 'compare'  (duration: 12.923317ms)"],"step_count":2}
	{"level":"warn","ts":"2025-11-01T09:21:22.978394Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"126.303584ms","expected-duration":"100ms","prefix":"","request":"header:<ID:15638356350263845855 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/secrets/kubernetes-dashboard/kubernetes-dashboard-csrf\" mod_revision:491 > success:<request_put:<key:\"/registry/secrets/kubernetes-dashboard/kubernetes-dashboard-csrf\" value_size:1272 >> failure:<request_range:<key:\"/registry/secrets/kubernetes-dashboard/kubernetes-dashboard-csrf\" > >>","response":"size:16"}
	{"level":"info","ts":"2025-11-01T09:21:22.978608Z","caller":"traceutil/trace.go:172","msg":"trace[1497868239] transaction","detail":"{read_only:false; response_revision:625; number_of_response:1; }","duration":"124.201304ms","start":"2025-11-01T09:21:22.854391Z","end":"2025-11-01T09:21:22.978592Z","steps":["trace[1497868239] 'process raft request'  (duration: 124.122037ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-01T09:21:22.978616Z","caller":"traceutil/trace.go:172","msg":"trace[2133583357] transaction","detail":"{read_only:false; response_revision:624; number_of_response:1; }","duration":"141.516671ms","start":"2025-11-01T09:21:22.837087Z","end":"2025-11-01T09:21:22.978604Z","steps":["trace[2133583357] 'process raft request'  (duration: 14.411562ms)","trace[2133583357] 'compare'  (duration: 126.164512ms)"],"step_count":2}
	{"level":"info","ts":"2025-11-01T09:21:23.359409Z","caller":"traceutil/trace.go:172","msg":"trace[693850273] transaction","detail":"{read_only:false; response_revision:626; number_of_response:1; }","duration":"130.458251ms","start":"2025-11-01T09:21:23.228930Z","end":"2025-11-01T09:21:23.359388Z","steps":["trace[693850273] 'process raft request'  (duration: 130.312652ms)"],"step_count":1}
	
	
	==> kernel <==
	 09:21:58 up  1:04,  0 user,  load average: 3.90, 2.90, 1.73
	Linux embed-certs-236314 6.8.0-1043-gcp #46~22.04.1-Ubuntu SMP Wed Oct 22 19:00:03 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [c76b1cf0e992cc091c5557f5c0067cc245d9e9be10f9683721fbc495f757f1dd] <==
	I1101 09:21:09.819968       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1101 09:21:09.820274       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1101 09:21:09.820480       1 main.go:148] setting mtu 1500 for CNI 
	I1101 09:21:09.820503       1 main.go:178] kindnetd IP family: "ipv4"
	I1101 09:21:09.820532       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-01T09:21:10Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1101 09:21:10.117476       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1101 09:21:10.117576       1 controller.go:381] "Waiting for informer caches to sync"
	I1101 09:21:10.117625       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1101 09:21:10.117815       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1101 09:21:10.413558       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1101 09:21:10.413625       1 metrics.go:72] Registering metrics
	I1101 09:21:10.413745       1 controller.go:711] "Syncing nftables rules"
	I1101 09:21:20.118008       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1101 09:21:20.118144       1 main.go:301] handling current node
	I1101 09:21:30.120952       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1101 09:21:30.121025       1 main.go:301] handling current node
	I1101 09:21:40.117567       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1101 09:21:40.117607       1 main.go:301] handling current node
	I1101 09:21:50.117906       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1101 09:21:50.117940       1 main.go:301] handling current node
	
	
	==> kube-apiserver [63c22508cf7059b3b3f3d3dca5c0c8bae9ba37801ed8914d301b3b69f0fc7f4d] <==
	I1101 09:21:08.931746       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1101 09:21:08.931782       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1101 09:21:08.931813       1 aggregator.go:171] initial CRD sync complete...
	I1101 09:21:08.931823       1 autoregister_controller.go:144] Starting autoregister controller
	I1101 09:21:08.931828       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1101 09:21:08.931834       1 cache.go:39] Caches are synced for autoregister controller
	I1101 09:21:08.932088       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1101 09:21:08.932116       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1101 09:21:08.932135       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1101 09:21:08.932161       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1101 09:21:08.932174       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1101 09:21:08.933786       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1101 09:21:08.940888       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1101 09:21:08.970613       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1101 09:21:09.229890       1 controller.go:667] quota admission added evaluator for: namespaces
	I1101 09:21:09.266847       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1101 09:21:09.291346       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1101 09:21:09.302486       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1101 09:21:09.312758       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1101 09:21:09.381103       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.103.69.19"}
	I1101 09:21:09.405587       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.102.189.3"}
	I1101 09:21:09.835629       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1101 09:21:12.268831       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1101 09:21:12.617536       1 controller.go:667] quota admission added evaluator for: endpoints
	I1101 09:21:12.767467       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [c53066ca825ef150c1b3480d4c681c275883620b56bfc97b3e50480bdd6dc761] <==
	I1101 09:21:12.256491       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1101 09:21:12.259787       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1101 09:21:12.261925       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1101 09:21:12.263175       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1101 09:21:12.263210       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1101 09:21:12.263240       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1101 09:21:12.263258       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1101 09:21:12.263299       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1101 09:21:12.263244       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1101 09:21:12.263643       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1101 09:21:12.264101       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1101 09:21:12.264270       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1101 09:21:12.264383       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1101 09:21:12.264481       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="embed-certs-236314"
	I1101 09:21:12.264527       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1101 09:21:12.264653       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1101 09:21:12.268201       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1101 09:21:12.272332       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1101 09:21:12.285578       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1101 09:21:12.285658       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1101 09:21:12.285707       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1101 09:21:12.285714       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1101 09:21:12.285719       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1101 09:21:12.288926       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1101 09:21:12.295138       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [cf0921be2c864b0ad5e89bbcde93cfdeb7214cf2e8fbeeb40447ed91e7d93636] <==
	I1101 09:21:09.708756       1 server_linux.go:53] "Using iptables proxy"
	I1101 09:21:09.762813       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1101 09:21:09.863441       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1101 09:21:09.863679       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1101 09:21:09.863797       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1101 09:21:09.894147       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1101 09:21:09.894233       1 server_linux.go:132] "Using iptables Proxier"
	I1101 09:21:09.901928       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1101 09:21:09.902471       1 server.go:527] "Version info" version="v1.34.1"
	I1101 09:21:09.902562       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1101 09:21:09.906960       1 config.go:106] "Starting endpoint slice config controller"
	I1101 09:21:09.907173       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1101 09:21:09.907056       1 config.go:200] "Starting service config controller"
	I1101 09:21:09.907598       1 config.go:309] "Starting node config controller"
	I1101 09:21:09.907638       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1101 09:21:09.907612       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1101 09:21:09.907851       1 config.go:403] "Starting serviceCIDR config controller"
	I1101 09:21:09.907891       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1101 09:21:10.008613       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1101 09:21:10.008645       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1101 09:21:10.008729       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1101 09:21:10.008762       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [cdf866b372073a7755ed447cdf8634d89a5c22e16db02cc9cfe7c76643d51a6c] <==
	I1101 09:21:07.638181       1 serving.go:386] Generated self-signed cert in-memory
	W1101 09:21:08.851839       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1101 09:21:08.851903       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1101 09:21:08.851916       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1101 09:21:08.851926       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1101 09:21:08.903734       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1101 09:21:08.903764       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1101 09:21:08.906674       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1101 09:21:08.906715       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1101 09:21:08.907217       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1101 09:21:08.907260       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1101 09:21:09.006985       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 01 09:21:11 embed-certs-236314 kubelet[715]: I1101 09:21:11.627564     715 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Nov 01 09:21:13 embed-certs-236314 kubelet[715]: I1101 09:21:13.011262     715 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/e4c34dbc-f680-46c7-92ea-18532ff6d5f0-tmp-volume\") pod \"kubernetes-dashboard-855c9754f9-vtj9p\" (UID: \"e4c34dbc-f680-46c7-92ea-18532ff6d5f0\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-vtj9p"
	Nov 01 09:21:13 embed-certs-236314 kubelet[715]: I1101 09:21:13.011328     715 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m52hh\" (UniqueName: \"kubernetes.io/projected/e4c34dbc-f680-46c7-92ea-18532ff6d5f0-kube-api-access-m52hh\") pod \"kubernetes-dashboard-855c9754f9-vtj9p\" (UID: \"e4c34dbc-f680-46c7-92ea-18532ff6d5f0\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-vtj9p"
	Nov 01 09:21:13 embed-certs-236314 kubelet[715]: I1101 09:21:13.011360     715 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/2a276c9f-90da-419c-847f-6569a61c530c-tmp-volume\") pod \"dashboard-metrics-scraper-6ffb444bf9-zt69p\" (UID: \"2a276c9f-90da-419c-847f-6569a61c530c\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-zt69p"
	Nov 01 09:21:13 embed-certs-236314 kubelet[715]: I1101 09:21:13.011497     715 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4f6dp\" (UniqueName: \"kubernetes.io/projected/2a276c9f-90da-419c-847f-6569a61c530c-kube-api-access-4f6dp\") pod \"dashboard-metrics-scraper-6ffb444bf9-zt69p\" (UID: \"2a276c9f-90da-419c-847f-6569a61c530c\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-zt69p"
	Nov 01 09:21:17 embed-certs-236314 kubelet[715]: I1101 09:21:17.356664     715 scope.go:117] "RemoveContainer" containerID="a15942aa21a01eaffc764dcad49f99cdbf20b7627734f2a9353411b6d3a0345e"
	Nov 01 09:21:18 embed-certs-236314 kubelet[715]: I1101 09:21:18.363264     715 scope.go:117] "RemoveContainer" containerID="a15942aa21a01eaffc764dcad49f99cdbf20b7627734f2a9353411b6d3a0345e"
	Nov 01 09:21:18 embed-certs-236314 kubelet[715]: I1101 09:21:18.363846     715 scope.go:117] "RemoveContainer" containerID="c4cc50c8c91718cdd4a830baa14948c9260b14397567a66243b02ec45b341317"
	Nov 01 09:21:18 embed-certs-236314 kubelet[715]: E1101 09:21:18.364124     715 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-zt69p_kubernetes-dashboard(2a276c9f-90da-419c-847f-6569a61c530c)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-zt69p" podUID="2a276c9f-90da-419c-847f-6569a61c530c"
	Nov 01 09:21:19 embed-certs-236314 kubelet[715]: I1101 09:21:19.368508     715 scope.go:117] "RemoveContainer" containerID="c4cc50c8c91718cdd4a830baa14948c9260b14397567a66243b02ec45b341317"
	Nov 01 09:21:19 embed-certs-236314 kubelet[715]: E1101 09:21:19.368758     715 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-zt69p_kubernetes-dashboard(2a276c9f-90da-419c-847f-6569a61c530c)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-zt69p" podUID="2a276c9f-90da-419c-847f-6569a61c530c"
	Nov 01 09:21:23 embed-certs-236314 kubelet[715]: I1101 09:21:23.395546     715 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-vtj9p" podStartSLOduration=2.233380409 podStartE2EDuration="11.395518274s" podCreationTimestamp="2025-11-01 09:21:12 +0000 UTC" firstStartedPulling="2025-11-01 09:21:13.403418027 +0000 UTC m=+7.231927342" lastFinishedPulling="2025-11-01 09:21:22.565555911 +0000 UTC m=+16.394065207" observedRunningTime="2025-11-01 09:21:23.39458662 +0000 UTC m=+17.223095935" watchObservedRunningTime="2025-11-01 09:21:23.395518274 +0000 UTC m=+17.224027594"
	Nov 01 09:21:26 embed-certs-236314 kubelet[715]: I1101 09:21:26.303639     715 scope.go:117] "RemoveContainer" containerID="c4cc50c8c91718cdd4a830baa14948c9260b14397567a66243b02ec45b341317"
	Nov 01 09:21:26 embed-certs-236314 kubelet[715]: E1101 09:21:26.303847     715 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-zt69p_kubernetes-dashboard(2a276c9f-90da-419c-847f-6569a61c530c)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-zt69p" podUID="2a276c9f-90da-419c-847f-6569a61c530c"
	Nov 01 09:21:39 embed-certs-236314 kubelet[715]: I1101 09:21:39.290240     715 scope.go:117] "RemoveContainer" containerID="c4cc50c8c91718cdd4a830baa14948c9260b14397567a66243b02ec45b341317"
	Nov 01 09:21:39 embed-certs-236314 kubelet[715]: I1101 09:21:39.428063     715 scope.go:117] "RemoveContainer" containerID="c4cc50c8c91718cdd4a830baa14948c9260b14397567a66243b02ec45b341317"
	Nov 01 09:21:39 embed-certs-236314 kubelet[715]: I1101 09:21:39.428354     715 scope.go:117] "RemoveContainer" containerID="97e232d23f29552301319ab346cf13e85b89566b637b24177cd78bdfb630fd2f"
	Nov 01 09:21:39 embed-certs-236314 kubelet[715]: E1101 09:21:39.428542     715 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-zt69p_kubernetes-dashboard(2a276c9f-90da-419c-847f-6569a61c530c)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-zt69p" podUID="2a276c9f-90da-419c-847f-6569a61c530c"
	Nov 01 09:21:40 embed-certs-236314 kubelet[715]: I1101 09:21:40.432490     715 scope.go:117] "RemoveContainer" containerID="ff5eeb3598d0ee0d8632ba6b2c43ba490782a9c06cdab6d790fbd85ba9094d8e"
	Nov 01 09:21:46 embed-certs-236314 kubelet[715]: I1101 09:21:46.304344     715 scope.go:117] "RemoveContainer" containerID="97e232d23f29552301319ab346cf13e85b89566b637b24177cd78bdfb630fd2f"
	Nov 01 09:21:46 embed-certs-236314 kubelet[715]: E1101 09:21:46.304682     715 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-zt69p_kubernetes-dashboard(2a276c9f-90da-419c-847f-6569a61c530c)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-zt69p" podUID="2a276c9f-90da-419c-847f-6569a61c530c"
	Nov 01 09:21:55 embed-certs-236314 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 01 09:21:55 embed-certs-236314 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 01 09:21:55 embed-certs-236314 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Nov 01 09:21:55 embed-certs-236314 systemd[1]: kubelet.service: Consumed 1.769s CPU time.
	
	
	==> kubernetes-dashboard [ab9a1c2871ebfdf28b52214510e1799784842fc9e5a2a4f8ac62fa64668e5010] <==
	2025/11/01 09:21:22 Starting overwatch
	2025/11/01 09:21:22 Using namespace: kubernetes-dashboard
	2025/11/01 09:21:22 Using in-cluster config to connect to apiserver
	2025/11/01 09:21:22 Using secret token for csrf signing
	2025/11/01 09:21:22 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/11/01 09:21:22 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/11/01 09:21:22 Successful initial request to the apiserver, version: v1.34.1
	2025/11/01 09:21:22 Generating JWE encryption key
	2025/11/01 09:21:22 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/11/01 09:21:22 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/11/01 09:21:23 Initializing JWE encryption key from synchronized object
	2025/11/01 09:21:23 Creating in-cluster Sidecar client
	2025/11/01 09:21:23 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/01 09:21:23 Serving insecurely on HTTP port: 9090
	2025/11/01 09:21:53 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [6c99ae25ef0e9393dddf231085bca13268e9d35c7587c2535d9874ef0b8bc855] <==
	I1101 09:21:40.481213       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1101 09:21:40.488672       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1101 09:21:40.488711       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1101 09:21:40.491118       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:21:43.948936       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:21:48.209610       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:21:51.807968       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:21:54.862316       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:21:57.884737       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:21:57.890964       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1101 09:21:57.891137       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1101 09:21:57.891328       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-236314_37c25550-526d-4c69-923c-0ab37098c4f0!
	I1101 09:21:57.891358       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"c262ede2-8436-41d8-b457-ce72d2fd66a5", APIVersion:"v1", ResourceVersion:"664", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-236314_37c25550-526d-4c69-923c-0ab37098c4f0 became leader
	W1101 09:21:57.893563       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:21:57.898093       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1101 09:21:57.991631       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-236314_37c25550-526d-4c69-923c-0ab37098c4f0!
	
	
	==> storage-provisioner [ff5eeb3598d0ee0d8632ba6b2c43ba490782a9c06cdab6d790fbd85ba9094d8e] <==
	I1101 09:21:09.674243       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1101 09:21:39.678080       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-236314 -n embed-certs-236314
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-236314 -n embed-certs-236314: exit status 2 (451.694748ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context embed-certs-236314 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect embed-certs-236314
helpers_test.go:243: (dbg) docker inspect embed-certs-236314:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "9e1a1d18390353a69d0f0d9962510aadff6b6976ac58bf83ab99b25a52d19c64",
	        "Created": "2025-11-01T09:19:56.919781471Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 256444,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-01T09:20:59.66866613Z",
	            "FinishedAt": "2025-11-01T09:20:58.750556681Z"
	        },
	        "Image": "sha256:a1caeebaf98ed0136731e905a1e086f77985a42c2ebb5a7e0b3d0bd7fcbe10cc",
	        "ResolvConfPath": "/var/lib/docker/containers/9e1a1d18390353a69d0f0d9962510aadff6b6976ac58bf83ab99b25a52d19c64/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/9e1a1d18390353a69d0f0d9962510aadff6b6976ac58bf83ab99b25a52d19c64/hostname",
	        "HostsPath": "/var/lib/docker/containers/9e1a1d18390353a69d0f0d9962510aadff6b6976ac58bf83ab99b25a52d19c64/hosts",
	        "LogPath": "/var/lib/docker/containers/9e1a1d18390353a69d0f0d9962510aadff6b6976ac58bf83ab99b25a52d19c64/9e1a1d18390353a69d0f0d9962510aadff6b6976ac58bf83ab99b25a52d19c64-json.log",
	        "Name": "/embed-certs-236314",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-236314:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "embed-certs-236314",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "9e1a1d18390353a69d0f0d9962510aadff6b6976ac58bf83ab99b25a52d19c64",
	                "LowerDir": "/var/lib/docker/overlay2/058db38a3e51e77a68a2911f27d674e0411b25d26e2fe50bb66959a3e62a7c04-init/diff:/var/lib/docker/overlay2/95b4043403bc6c2b0722fad0bb31817195e1e004282cc78b772e7c8c9f9def2d/diff",
	                "MergedDir": "/var/lib/docker/overlay2/058db38a3e51e77a68a2911f27d674e0411b25d26e2fe50bb66959a3e62a7c04/merged",
	                "UpperDir": "/var/lib/docker/overlay2/058db38a3e51e77a68a2911f27d674e0411b25d26e2fe50bb66959a3e62a7c04/diff",
	                "WorkDir": "/var/lib/docker/overlay2/058db38a3e51e77a68a2911f27d674e0411b25d26e2fe50bb66959a3e62a7c04/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "embed-certs-236314",
	                "Source": "/var/lib/docker/volumes/embed-certs-236314/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-236314",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-236314",
	                "name.minikube.sigs.k8s.io": "embed-certs-236314",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "2757941023d9cf67183fa060e6ff1d75306699f398afbf89ce4bd002b69d1655",
	            "SandboxKey": "/var/run/docker/netns/2757941023d9",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33078"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33079"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33082"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33080"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33081"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "embed-certs-236314": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "72:37:5a:d3:4f:7a",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "2f536846b22cd19ee4958cff8ea6caf971d5b2fed6041edde3ccc625d2886d4f",
	                    "EndpointID": "a6b2dd490befa02bad0495d574ab23cf733c1a8cc81f831965f0f0f597b0a4b3",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-236314",
	                        "9e1a1d183903"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-236314 -n embed-certs-236314
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-236314 -n embed-certs-236314: exit status 2 (456.398467ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-236314 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-236314 logs -n 25: (1.212373573s)
helpers_test.go:260: TestStartStop/group/embed-certs/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ addons  │ enable dashboard -p old-k8s-version-152344 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-152344       │ jenkins │ v1.37.0 │ 01 Nov 25 09:19 UTC │ 01 Nov 25 09:19 UTC │
	│ start   │ -p old-k8s-version-152344 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-152344       │ jenkins │ v1.37.0 │ 01 Nov 25 09:19 UTC │ 01 Nov 25 09:20 UTC │
	│ addons  │ enable dashboard -p no-preload-397460 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-397460            │ jenkins │ v1.37.0 │ 01 Nov 25 09:20 UTC │ 01 Nov 25 09:20 UTC │
	│ start   │ -p no-preload-397460 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-397460            │ jenkins │ v1.37.0 │ 01 Nov 25 09:20 UTC │ 01 Nov 25 09:20 UTC │
	│ addons  │ enable metrics-server -p embed-certs-236314 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-236314           │ jenkins │ v1.37.0 │ 01 Nov 25 09:20 UTC │                     │
	│ stop    │ -p embed-certs-236314 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-236314           │ jenkins │ v1.37.0 │ 01 Nov 25 09:20 UTC │ 01 Nov 25 09:20 UTC │
	│ addons  │ enable dashboard -p embed-certs-236314 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-236314           │ jenkins │ v1.37.0 │ 01 Nov 25 09:20 UTC │ 01 Nov 25 09:20 UTC │
	│ start   │ -p embed-certs-236314 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-236314           │ jenkins │ v1.37.0 │ 01 Nov 25 09:20 UTC │ 01 Nov 25 09:21 UTC │
	│ image   │ old-k8s-version-152344 image list --format=json                                                                                                                                                                                               │ old-k8s-version-152344       │ jenkins │ v1.37.0 │ 01 Nov 25 09:21 UTC │ 01 Nov 25 09:21 UTC │
	│ pause   │ -p old-k8s-version-152344 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-152344       │ jenkins │ v1.37.0 │ 01 Nov 25 09:21 UTC │                     │
	│ image   │ no-preload-397460 image list --format=json                                                                                                                                                                                                    │ no-preload-397460            │ jenkins │ v1.37.0 │ 01 Nov 25 09:21 UTC │ 01 Nov 25 09:21 UTC │
	│ pause   │ -p no-preload-397460 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-397460            │ jenkins │ v1.37.0 │ 01 Nov 25 09:21 UTC │                     │
	│ delete  │ -p old-k8s-version-152344                                                                                                                                                                                                                     │ old-k8s-version-152344       │ jenkins │ v1.37.0 │ 01 Nov 25 09:21 UTC │ 01 Nov 25 09:21 UTC │
	│ delete  │ -p old-k8s-version-152344                                                                                                                                                                                                                     │ old-k8s-version-152344       │ jenkins │ v1.37.0 │ 01 Nov 25 09:21 UTC │ 01 Nov 25 09:21 UTC │
	│ delete  │ -p no-preload-397460                                                                                                                                                                                                                          │ no-preload-397460            │ jenkins │ v1.37.0 │ 01 Nov 25 09:21 UTC │ 01 Nov 25 09:21 UTC │
	│ delete  │ -p disable-driver-mounts-366530                                                                                                                                                                                                               │ disable-driver-mounts-366530 │ jenkins │ v1.37.0 │ 01 Nov 25 09:21 UTC │ 01 Nov 25 09:21 UTC │
	│ start   │ -p default-k8s-diff-port-648641 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-648641 │ jenkins │ v1.37.0 │ 01 Nov 25 09:21 UTC │ 01 Nov 25 09:21 UTC │
	│ delete  │ -p no-preload-397460                                                                                                                                                                                                                          │ no-preload-397460            │ jenkins │ v1.37.0 │ 01 Nov 25 09:21 UTC │ 01 Nov 25 09:21 UTC │
	│ start   │ -p newest-cni-340756 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-340756            │ jenkins │ v1.37.0 │ 01 Nov 25 09:21 UTC │ 01 Nov 25 09:21 UTC │
	│ addons  │ enable metrics-server -p newest-cni-340756 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-340756            │ jenkins │ v1.37.0 │ 01 Nov 25 09:21 UTC │                     │
	│ stop    │ -p newest-cni-340756 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-340756            │ jenkins │ v1.37.0 │ 01 Nov 25 09:21 UTC │ 01 Nov 25 09:21 UTC │
	│ image   │ embed-certs-236314 image list --format=json                                                                                                                                                                                                   │ embed-certs-236314           │ jenkins │ v1.37.0 │ 01 Nov 25 09:21 UTC │ 01 Nov 25 09:21 UTC │
	│ pause   │ -p embed-certs-236314 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-236314           │ jenkins │ v1.37.0 │ 01 Nov 25 09:21 UTC │                     │
	│ addons  │ enable dashboard -p newest-cni-340756 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-340756            │ jenkins │ v1.37.0 │ 01 Nov 25 09:21 UTC │ 01 Nov 25 09:21 UTC │
	│ start   │ -p newest-cni-340756 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-340756            │ jenkins │ v1.37.0 │ 01 Nov 25 09:21 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/01 09:21:59
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1101 09:21:59.659740  273527 out.go:360] Setting OutFile to fd 1 ...
	I1101 09:21:59.659892  273527 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:21:59.659899  273527 out.go:374] Setting ErrFile to fd 2...
	I1101 09:21:59.659905  273527 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:21:59.660241  273527 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21835-5913/.minikube/bin
	I1101 09:21:59.661210  273527 out.go:368] Setting JSON to false
	I1101 09:21:59.663112  273527 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":3868,"bootTime":1761985052,"procs":306,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1101 09:21:59.663270  273527 start.go:143] virtualization: kvm guest
	I1101 09:21:59.666121  273527 out.go:179] * [newest-cni-340756] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1101 09:21:59.667721  273527 out.go:179]   - MINIKUBE_LOCATION=21835
	I1101 09:21:59.667800  273527 notify.go:221] Checking for updates...
	I1101 09:21:59.669529  273527 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1101 09:21:59.671051  273527 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21835-5913/kubeconfig
	I1101 09:21:59.672482  273527 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21835-5913/.minikube
	I1101 09:21:59.674036  273527 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1101 09:21:59.675485  273527 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1101 09:21:59.677472  273527 config.go:182] Loaded profile config "newest-cni-340756": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:21:59.678195  273527 driver.go:422] Setting default libvirt URI to qemu:///system
	I1101 09:21:59.711881  273527 docker.go:124] docker version: linux-28.5.1:Docker Engine - Community
	I1101 09:21:59.711997  273527 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 09:21:59.796757  273527 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:72 OomKillDisable:false NGoroutines:75 SystemTime:2025-11-01 09:21:59.783182454 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1101 09:21:59.796919  273527 docker.go:319] overlay module found
	I1101 09:21:59.803327  273527 out.go:179] * Using the docker driver based on existing profile
	I1101 09:21:57.448518  216020 out.go:252]   - Booting up control plane ...
	I1101 09:21:57.448668  216020 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1101 09:21:57.448799  216020 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1101 09:21:57.448929  216020 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1101 09:21:57.464387  216020 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1101 09:21:57.464588  216020 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1101 09:21:57.472079  216020 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1101 09:21:57.472467  216020 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1101 09:21:57.472547  216020 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1101 09:21:57.589934  216020 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1101 09:21:57.590075  216020 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1101 09:21:58.091173  216020 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 501.402761ms
	I1101 09:21:58.094476  216020 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1101 09:21:58.094620  216020 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	I1101 09:21:58.094735  216020 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1101 09:21:58.094851  216020 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1101 09:21:59.804545  273527 start.go:309] selected driver: docker
	I1101 09:21:59.804565  273527 start.go:930] validating driver "docker" against &{Name:newest-cni-340756 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-340756 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker Mou
ntIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 09:21:59.804709  273527 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1101 09:21:59.806908  273527 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 09:21:59.895156  273527 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:72 OomKillDisable:false NGoroutines:75 SystemTime:2025-11-01 09:21:59.87713648 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x8
6_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1101 09:21:59.895540  273527 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1101 09:21:59.895575  273527 cni.go:84] Creating CNI manager for ""
	I1101 09:21:59.895645  273527 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1101 09:21:59.895717  273527 start.go:353] cluster config:
	{Name:newest-cni-340756 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-340756 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker
BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 09:21:59.899006  273527 out.go:179] * Starting "newest-cni-340756" primary control-plane node in "newest-cni-340756" cluster
	I1101 09:21:59.900735  273527 cache.go:124] Beginning downloading kic base image for docker with crio
	I1101 09:21:59.903338  273527 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1101 09:21:59.905077  273527 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1101 09:21:59.905132  273527 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21835-5913/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1101 09:21:59.905142  273527 cache.go:59] Caching tarball of preloaded images
	I1101 09:21:59.905291  273527 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1101 09:21:59.905536  273527 preload.go:233] Found /home/jenkins/minikube-integration/21835-5913/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1101 09:21:59.905591  273527 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1101 09:21:59.905769  273527 profile.go:143] Saving config to /home/jenkins/minikube-integration/21835-5913/.minikube/profiles/newest-cni-340756/config.json ...
	I1101 09:21:59.934340  273527 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1101 09:21:59.934376  273527 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1101 09:21:59.934393  273527 cache.go:233] Successfully downloaded all kic artifacts
	I1101 09:21:59.934424  273527 start.go:360] acquireMachinesLock for newest-cni-340756: {Name:mk88172481da3b8a8d740f548867bdcc84a2d863 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1101 09:21:59.934510  273527 start.go:364] duration metric: took 50.478µs to acquireMachinesLock for "newest-cni-340756"
	I1101 09:21:59.934529  273527 start.go:96] Skipping create...Using existing machine configuration
	I1101 09:21:59.934535  273527 fix.go:54] fixHost starting: 
	I1101 09:21:59.934814  273527 cli_runner.go:164] Run: docker container inspect newest-cni-340756 --format={{.State.Status}}
	I1101 09:21:59.958384  273527 fix.go:112] recreateIfNeeded on newest-cni-340756: state=Stopped err=<nil>
	W1101 09:21:59.958431  273527 fix.go:138] unexpected machine state, will restart: <nil>
	
	
	==> CRI-O <==
	Nov 01 09:21:22 embed-certs-236314 crio[560]: time="2025-11-01T09:21:22.764911642Z" level=info msg="Created container ab9a1c2871ebfdf28b52214510e1799784842fc9e5a2a4f8ac62fa64668e5010: kubernetes-dashboard/kubernetes-dashboard-855c9754f9-vtj9p/kubernetes-dashboard" id=76035249-0966-41a3-92c1-1ec3c45ba712 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 09:21:22 embed-certs-236314 crio[560]: time="2025-11-01T09:21:22.765960596Z" level=info msg="Starting container: ab9a1c2871ebfdf28b52214510e1799784842fc9e5a2a4f8ac62fa64668e5010" id=66b39d30-dcd0-4573-97cb-edc26b67b88c name=/runtime.v1.RuntimeService/StartContainer
	Nov 01 09:21:22 embed-certs-236314 crio[560]: time="2025-11-01T09:21:22.768527358Z" level=info msg="Started container" PID=1710 containerID=ab9a1c2871ebfdf28b52214510e1799784842fc9e5a2a4f8ac62fa64668e5010 description=kubernetes-dashboard/kubernetes-dashboard-855c9754f9-vtj9p/kubernetes-dashboard id=66b39d30-dcd0-4573-97cb-edc26b67b88c name=/runtime.v1.RuntimeService/StartContainer sandboxID=a3e0275e5ce4b52cf9c745e50b5d55c6ccc559a87ae59a82249bd37a745f6d55
	Nov 01 09:21:39 embed-certs-236314 crio[560]: time="2025-11-01T09:21:39.290905939Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=01184b61-6da0-4251-84cf-5e172e8093f2 name=/runtime.v1.ImageService/ImageStatus
	Nov 01 09:21:39 embed-certs-236314 crio[560]: time="2025-11-01T09:21:39.292146638Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=cea464fc-9e08-4187-9022-a248febc38eb name=/runtime.v1.ImageService/ImageStatus
	Nov 01 09:21:39 embed-certs-236314 crio[560]: time="2025-11-01T09:21:39.293683727Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-zt69p/dashboard-metrics-scraper" id=a98e1bd8-b0ae-490b-a817-ae6aaacbc6d6 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 09:21:39 embed-certs-236314 crio[560]: time="2025-11-01T09:21:39.293843662Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 09:21:39 embed-certs-236314 crio[560]: time="2025-11-01T09:21:39.301061939Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 09:21:39 embed-certs-236314 crio[560]: time="2025-11-01T09:21:39.301715895Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 09:21:39 embed-certs-236314 crio[560]: time="2025-11-01T09:21:39.350650395Z" level=info msg="Created container 97e232d23f29552301319ab346cf13e85b89566b637b24177cd78bdfb630fd2f: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-zt69p/dashboard-metrics-scraper" id=a98e1bd8-b0ae-490b-a817-ae6aaacbc6d6 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 09:21:39 embed-certs-236314 crio[560]: time="2025-11-01T09:21:39.351516772Z" level=info msg="Starting container: 97e232d23f29552301319ab346cf13e85b89566b637b24177cd78bdfb630fd2f" id=bdb3a04e-82a2-4bc2-8cb5-10c94ca06458 name=/runtime.v1.RuntimeService/StartContainer
	Nov 01 09:21:39 embed-certs-236314 crio[560]: time="2025-11-01T09:21:39.354257615Z" level=info msg="Started container" PID=1729 containerID=97e232d23f29552301319ab346cf13e85b89566b637b24177cd78bdfb630fd2f description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-zt69p/dashboard-metrics-scraper id=bdb3a04e-82a2-4bc2-8cb5-10c94ca06458 name=/runtime.v1.RuntimeService/StartContainer sandboxID=140e0c3418f98a0793f2c6f68d4699316bf74c7bd5f51c10cfcabd622151687e
	Nov 01 09:21:39 embed-certs-236314 crio[560]: time="2025-11-01T09:21:39.433465582Z" level=info msg="Removing container: c4cc50c8c91718cdd4a830baa14948c9260b14397567a66243b02ec45b341317" id=2eadbd93-8ac5-4af1-b722-ce57065d8ced name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 01 09:21:39 embed-certs-236314 crio[560]: time="2025-11-01T09:21:39.452573773Z" level=info msg="Removed container c4cc50c8c91718cdd4a830baa14948c9260b14397567a66243b02ec45b341317: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-zt69p/dashboard-metrics-scraper" id=2eadbd93-8ac5-4af1-b722-ce57065d8ced name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 01 09:21:40 embed-certs-236314 crio[560]: time="2025-11-01T09:21:40.432955431Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=ede4e327-cad3-4f0a-8dae-47a8cb99730b name=/runtime.v1.ImageService/ImageStatus
	Nov 01 09:21:40 embed-certs-236314 crio[560]: time="2025-11-01T09:21:40.433937954Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=fb61cea0-478c-4dd4-8e2f-085d12935c14 name=/runtime.v1.ImageService/ImageStatus
	Nov 01 09:21:40 embed-certs-236314 crio[560]: time="2025-11-01T09:21:40.435088398Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=ea3fe609-9058-45f5-8983-267ec8c3e5a9 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 09:21:40 embed-certs-236314 crio[560]: time="2025-11-01T09:21:40.435222827Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 09:21:40 embed-certs-236314 crio[560]: time="2025-11-01T09:21:40.439536824Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 09:21:40 embed-certs-236314 crio[560]: time="2025-11-01T09:21:40.439695759Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/cc0de986ca118965e38efc0e6848f19f6137ed66f7ca9a8754d32a7a44524164/merged/etc/passwd: no such file or directory"
	Nov 01 09:21:40 embed-certs-236314 crio[560]: time="2025-11-01T09:21:40.439720388Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/cc0de986ca118965e38efc0e6848f19f6137ed66f7ca9a8754d32a7a44524164/merged/etc/group: no such file or directory"
	Nov 01 09:21:40 embed-certs-236314 crio[560]: time="2025-11-01T09:21:40.439982305Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 09:21:40 embed-certs-236314 crio[560]: time="2025-11-01T09:21:40.46561482Z" level=info msg="Created container 6c99ae25ef0e9393dddf231085bca13268e9d35c7587c2535d9874ef0b8bc855: kube-system/storage-provisioner/storage-provisioner" id=ea3fe609-9058-45f5-8983-267ec8c3e5a9 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 09:21:40 embed-certs-236314 crio[560]: time="2025-11-01T09:21:40.466271061Z" level=info msg="Starting container: 6c99ae25ef0e9393dddf231085bca13268e9d35c7587c2535d9874ef0b8bc855" id=7dd966a8-c5fa-4f6e-b813-2e9d852bd295 name=/runtime.v1.RuntimeService/StartContainer
	Nov 01 09:21:40 embed-certs-236314 crio[560]: time="2025-11-01T09:21:40.468486867Z" level=info msg="Started container" PID=1746 containerID=6c99ae25ef0e9393dddf231085bca13268e9d35c7587c2535d9874ef0b8bc855 description=kube-system/storage-provisioner/storage-provisioner id=7dd966a8-c5fa-4f6e-b813-2e9d852bd295 name=/runtime.v1.RuntimeService/StartContainer sandboxID=00ef337336f28a6b16d03c1f8a448ce269ca75121c5dd095a114322e5ecf1816
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	6c99ae25ef0e9       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           20 seconds ago      Running             storage-provisioner         1                   00ef337336f28       storage-provisioner                          kube-system
	97e232d23f295       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           21 seconds ago      Exited              dashboard-metrics-scraper   2                   140e0c3418f98       dashboard-metrics-scraper-6ffb444bf9-zt69p   kubernetes-dashboard
	ab9a1c2871ebf       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   38 seconds ago      Running             kubernetes-dashboard        0                   a3e0275e5ce4b       kubernetes-dashboard-855c9754f9-vtj9p        kubernetes-dashboard
	12559a21df4b0       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           51 seconds ago      Running             busybox                     1                   04907abdd6503       busybox                                      default
	8d11e282bdb58       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                           51 seconds ago      Running             coredns                     0                   60f3b575e0bd4       coredns-66bc5c9577-wwvth                     kube-system
	c76b1cf0e992c       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           51 seconds ago      Running             kindnet-cni                 0                   a95572f26ed91       kindnet-mf8mj                                kube-system
	ff5eeb3598d0e       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           51 seconds ago      Exited              storage-provisioner         0                   00ef337336f28       storage-provisioner                          kube-system
	cf0921be2c864       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                           51 seconds ago      Running             kube-proxy                  0                   875d6e71db9c1       kube-proxy-55ft8                             kube-system
	bca3056e43561       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                           54 seconds ago      Running             etcd                        0                   b45a69f6f08e2       etcd-embed-certs-236314                      kube-system
	cdf866b372073       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                           54 seconds ago      Running             kube-scheduler              0                   95521059ca665       kube-scheduler-embed-certs-236314            kube-system
	c53066ca825ef       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                           54 seconds ago      Running             kube-controller-manager     0                   1b3862d595106       kube-controller-manager-embed-certs-236314   kube-system
	63c22508cf705       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                           54 seconds ago      Running             kube-apiserver              0                   1d2cdfe596f49       kube-apiserver-embed-certs-236314            kube-system
	
	
	==> coredns [8d11e282bdb581fc10907660c4ed84334e43ee3c72fbd91f47dfa5bd7fadf948] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:34944 - 31645 "HINFO IN 3901251451262326739.5616493826336659462. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.07433026s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               embed-certs-236314
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-236314
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=21e20c7776311c6e29254646bf2620ea610dd192
	                    minikube.k8s.io/name=embed-certs-236314
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_01T09_20_14_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 01 Nov 2025 09:20:10 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-236314
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 01 Nov 2025 09:21:49 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 01 Nov 2025 09:21:49 +0000   Sat, 01 Nov 2025 09:20:09 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 01 Nov 2025 09:21:49 +0000   Sat, 01 Nov 2025 09:20:09 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 01 Nov 2025 09:21:49 +0000   Sat, 01 Nov 2025 09:20:09 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 01 Nov 2025 09:21:49 +0000   Sat, 01 Nov 2025 09:20:29 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    embed-certs-236314
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863348Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863348Ki
	  pods:               110
	System Info:
	  Machine ID:                 98aac72b9abe9f06f1b9b38568f5cc96
	  System UUID:                dee9e247-3614-413a-be45-584e8f9ead09
	  Boot ID:                    87f40279-f2b1-40f3-beea-00aea5942dd1
	  Kernel Version:             6.8.0-1043-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         89s
	  kube-system                 coredns-66bc5c9577-wwvth                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     103s
	  kube-system                 etcd-embed-certs-236314                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         110s
	  kube-system                 kindnet-mf8mj                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      103s
	  kube-system                 kube-apiserver-embed-certs-236314             250m (3%)     0 (0%)      0 (0%)           0 (0%)         108s
	  kube-system                 kube-controller-manager-embed-certs-236314    200m (2%)     0 (0%)      0 (0%)           0 (0%)         108s
	  kube-system                 kube-proxy-55ft8                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         103s
	  kube-system                 kube-scheduler-embed-certs-236314             100m (1%)     0 (0%)      0 (0%)           0 (0%)         108s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         102s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-zt69p    0 (0%)        0 (0%)      0 (0%)           0 (0%)         49s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-vtj9p         0 (0%)        0 (0%)      0 (0%)           0 (0%)         49s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 102s                 kube-proxy       
	  Normal  Starting                 51s                  kube-proxy       
	  Normal  Starting                 113s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  113s (x8 over 113s)  kubelet          Node embed-certs-236314 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    113s (x8 over 113s)  kubelet          Node embed-certs-236314 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     113s (x8 over 113s)  kubelet          Node embed-certs-236314 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    108s                 kubelet          Node embed-certs-236314 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  108s                 kubelet          Node embed-certs-236314 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     108s                 kubelet          Node embed-certs-236314 status is now: NodeHasSufficientPID
	  Normal  Starting                 108s                 kubelet          Starting kubelet.
	  Normal  RegisteredNode           104s                 node-controller  Node embed-certs-236314 event: Registered Node embed-certs-236314 in Controller
	  Normal  NodeReady                92s                  kubelet          Node embed-certs-236314 status is now: NodeReady
	  Normal  Starting                 55s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  55s (x8 over 55s)    kubelet          Node embed-certs-236314 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    55s (x8 over 55s)    kubelet          Node embed-certs-236314 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     55s (x8 over 55s)    kubelet          Node embed-certs-236314 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           49s                  node-controller  Node embed-certs-236314 event: Registered Node embed-certs-236314 in Controller
	
	
	==> dmesg <==
	[  +0.000025] ll header: 00000000: 0a 94 8e 4a 1a 1e 72 e7 eb 50 9b c7 08 00
	[  +2.047726] IPv4: martian source 10.105.176.244 from 192.168.49.2, on dev eth0
	[  +0.000008] ll header: 00000000: 0a 94 8e 4a 1a 1e 72 e7 eb 50 9b c7 08 00
	[Nov 1 08:38] IPv4: martian source 10.105.176.244 from 192.168.49.2, on dev eth0
	[  +0.000025] ll header: 00000000: 0a 94 8e 4a 1a 1e 72 e7 eb 50 9b c7 08 00
	[  +2.215674] IPv4: martian source 10.105.176.244 from 192.168.49.2, on dev eth0
	[  +0.000007] ll header: 00000000: 0a 94 8e 4a 1a 1e 72 e7 eb 50 9b c7 08 00
	[  +1.047939] IPv4: martian source 10.244.0.3 from 192.168.49.2, on dev eth0
	[  +0.000023] ll header: 00000000: 0a 94 8e 4a 1a 1e 72 e7 eb 50 9b c7 08 00
	[  +1.023915] IPv4: martian source 10.244.0.3 from 192.168.49.2, on dev eth0
	[  +0.000007] ll header: 00000000: 0a 94 8e 4a 1a 1e 72 e7 eb 50 9b c7 08 00
	[  +1.023851] IPv4: martian source 10.244.0.3 from 192.168.49.2, on dev eth0
	[  +0.000021] ll header: 00000000: 0a 94 8e 4a 1a 1e 72 e7 eb 50 9b c7 08 00
	[  +1.023867] IPv4: martian source 10.244.0.3 from 192.168.49.2, on dev eth0
	[  +0.000026] ll header: 00000000: 0a 94 8e 4a 1a 1e 72 e7 eb 50 9b c7 08 00
	[  +1.023880] IPv4: martian source 10.244.0.3 from 192.168.49.2, on dev eth0
	[  +0.000033] ll header: 00000000: 0a 94 8e 4a 1a 1e 72 e7 eb 50 9b c7 08 00
	[  +1.023884] IPv4: martian source 10.244.0.3 from 192.168.49.2, on dev eth0
	[  +0.000029] ll header: 00000000: 0a 94 8e 4a 1a 1e 72 e7 eb 50 9b c7 08 00
	[  +1.023851] IPv4: martian source 10.244.0.3 from 192.168.49.2, on dev eth0
	[  +0.000028] ll header: 00000000: 0a 94 8e 4a 1a 1e 72 e7 eb 50 9b c7 08 00
	[  +4.031524] IPv4: martian source 10.244.0.3 from 192.168.49.2, on dev eth0
	[  +0.000026] ll header: 00000000: 0a 94 8e 4a 1a 1e 72 e7 eb 50 9b c7 08 00
	[  +8.255123] IPv4: martian source 10.244.0.3 from 192.168.49.2, on dev eth0
	[  +0.000022] ll header: 00000000: 0a 94 8e 4a 1a 1e 72 e7 eb 50 9b c7 08 00
	
	
	==> etcd [bca3056e4356124989f2b2cba8377cf3f660970574583fcca877cb776005e6ca] <==
	{"level":"warn","ts":"2025-11-01T09:21:08.205130Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48850","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:21:08.211269Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48864","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:21:08.217359Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48874","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:21:08.224507Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48886","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:21:08.239336Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48916","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:21:08.246649Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48938","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:21:08.253564Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48952","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:21:08.261516Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48976","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:21:08.269760Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48992","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:21:08.278195Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49020","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:21:08.287303Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49044","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:21:08.304042Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49058","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:21:08.312361Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49072","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:21:08.326177Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49094","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:21:08.330330Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49104","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:21:08.339060Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49110","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:21:08.347137Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49114","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:21:08.406294Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49144","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-11-01T09:21:16.080441Z","caller":"traceutil/trace.go:172","msg":"trace[671642900] transaction","detail":"{read_only:false; response_revision:543; number_of_response:1; }","duration":"134.696902ms","start":"2025-11-01T09:21:15.945721Z","end":"2025-11-01T09:21:16.080418Z","steps":["trace[671642900] 'process raft request'  (duration: 134.576372ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-01T09:21:16.302319Z","caller":"traceutil/trace.go:172","msg":"trace[2109263290] transaction","detail":"{read_only:false; response_revision:545; number_of_response:1; }","duration":"124.737748ms","start":"2025-11-01T09:21:16.177560Z","end":"2025-11-01T09:21:16.302297Z","steps":["trace[2109263290] 'process raft request'  (duration: 123.585591ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-01T09:21:21.029257Z","caller":"traceutil/trace.go:172","msg":"trace[188115137] transaction","detail":"{read_only:false; response_revision:618; number_of_response:1; }","duration":"174.299461ms","start":"2025-11-01T09:21:20.854940Z","end":"2025-11-01T09:21:21.029239Z","steps":["trace[188115137] 'process raft request'  (duration: 161.273591ms)","trace[188115137] 'compare'  (duration: 12.923317ms)"],"step_count":2}
	{"level":"warn","ts":"2025-11-01T09:21:22.978394Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"126.303584ms","expected-duration":"100ms","prefix":"","request":"header:<ID:15638356350263845855 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/secrets/kubernetes-dashboard/kubernetes-dashboard-csrf\" mod_revision:491 > success:<request_put:<key:\"/registry/secrets/kubernetes-dashboard/kubernetes-dashboard-csrf\" value_size:1272 >> failure:<request_range:<key:\"/registry/secrets/kubernetes-dashboard/kubernetes-dashboard-csrf\" > >>","response":"size:16"}
	{"level":"info","ts":"2025-11-01T09:21:22.978608Z","caller":"traceutil/trace.go:172","msg":"trace[1497868239] transaction","detail":"{read_only:false; response_revision:625; number_of_response:1; }","duration":"124.201304ms","start":"2025-11-01T09:21:22.854391Z","end":"2025-11-01T09:21:22.978592Z","steps":["trace[1497868239] 'process raft request'  (duration: 124.122037ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-01T09:21:22.978616Z","caller":"traceutil/trace.go:172","msg":"trace[2133583357] transaction","detail":"{read_only:false; response_revision:624; number_of_response:1; }","duration":"141.516671ms","start":"2025-11-01T09:21:22.837087Z","end":"2025-11-01T09:21:22.978604Z","steps":["trace[2133583357] 'process raft request'  (duration: 14.411562ms)","trace[2133583357] 'compare'  (duration: 126.164512ms)"],"step_count":2}
	{"level":"info","ts":"2025-11-01T09:21:23.359409Z","caller":"traceutil/trace.go:172","msg":"trace[693850273] transaction","detail":"{read_only:false; response_revision:626; number_of_response:1; }","duration":"130.458251ms","start":"2025-11-01T09:21:23.228930Z","end":"2025-11-01T09:21:23.359388Z","steps":["trace[693850273] 'process raft request'  (duration: 130.312652ms)"],"step_count":1}
	
	
	==> kernel <==
	 09:22:01 up  1:04,  0 user,  load average: 3.90, 2.90, 1.73
	Linux embed-certs-236314 6.8.0-1043-gcp #46~22.04.1-Ubuntu SMP Wed Oct 22 19:00:03 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [c76b1cf0e992cc091c5557f5c0067cc245d9e9be10f9683721fbc495f757f1dd] <==
	I1101 09:21:09.819968       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1101 09:21:09.820274       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1101 09:21:09.820480       1 main.go:148] setting mtu 1500 for CNI 
	I1101 09:21:09.820503       1 main.go:178] kindnetd IP family: "ipv4"
	I1101 09:21:09.820532       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-01T09:21:10Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1101 09:21:10.117476       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1101 09:21:10.117576       1 controller.go:381] "Waiting for informer caches to sync"
	I1101 09:21:10.117625       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1101 09:21:10.117815       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1101 09:21:10.413558       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1101 09:21:10.413625       1 metrics.go:72] Registering metrics
	I1101 09:21:10.413745       1 controller.go:711] "Syncing nftables rules"
	I1101 09:21:20.118008       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1101 09:21:20.118144       1 main.go:301] handling current node
	I1101 09:21:30.120952       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1101 09:21:30.121025       1 main.go:301] handling current node
	I1101 09:21:40.117567       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1101 09:21:40.117607       1 main.go:301] handling current node
	I1101 09:21:50.117906       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1101 09:21:50.117940       1 main.go:301] handling current node
	I1101 09:22:00.125945       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1101 09:22:00.126008       1 main.go:301] handling current node
	
	
	==> kube-apiserver [63c22508cf7059b3b3f3d3dca5c0c8bae9ba37801ed8914d301b3b69f0fc7f4d] <==
	I1101 09:21:08.931746       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1101 09:21:08.931782       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1101 09:21:08.931813       1 aggregator.go:171] initial CRD sync complete...
	I1101 09:21:08.931823       1 autoregister_controller.go:144] Starting autoregister controller
	I1101 09:21:08.931828       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1101 09:21:08.931834       1 cache.go:39] Caches are synced for autoregister controller
	I1101 09:21:08.932088       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1101 09:21:08.932116       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1101 09:21:08.932135       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1101 09:21:08.932161       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1101 09:21:08.932174       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1101 09:21:08.933786       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1101 09:21:08.940888       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1101 09:21:08.970613       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1101 09:21:09.229890       1 controller.go:667] quota admission added evaluator for: namespaces
	I1101 09:21:09.266847       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1101 09:21:09.291346       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1101 09:21:09.302486       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1101 09:21:09.312758       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1101 09:21:09.381103       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.103.69.19"}
	I1101 09:21:09.405587       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.102.189.3"}
	I1101 09:21:09.835629       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1101 09:21:12.268831       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1101 09:21:12.617536       1 controller.go:667] quota admission added evaluator for: endpoints
	I1101 09:21:12.767467       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [c53066ca825ef150c1b3480d4c681c275883620b56bfc97b3e50480bdd6dc761] <==
	I1101 09:21:12.256491       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1101 09:21:12.259787       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1101 09:21:12.261925       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1101 09:21:12.263175       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1101 09:21:12.263210       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1101 09:21:12.263240       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1101 09:21:12.263258       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1101 09:21:12.263299       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1101 09:21:12.263244       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1101 09:21:12.263643       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1101 09:21:12.264101       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1101 09:21:12.264270       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1101 09:21:12.264383       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1101 09:21:12.264481       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="embed-certs-236314"
	I1101 09:21:12.264527       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1101 09:21:12.264653       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1101 09:21:12.268201       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1101 09:21:12.272332       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1101 09:21:12.285578       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1101 09:21:12.285658       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1101 09:21:12.285707       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1101 09:21:12.285714       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1101 09:21:12.285719       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1101 09:21:12.288926       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1101 09:21:12.295138       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [cf0921be2c864b0ad5e89bbcde93cfdeb7214cf2e8fbeeb40447ed91e7d93636] <==
	I1101 09:21:09.708756       1 server_linux.go:53] "Using iptables proxy"
	I1101 09:21:09.762813       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1101 09:21:09.863441       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1101 09:21:09.863679       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1101 09:21:09.863797       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1101 09:21:09.894147       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1101 09:21:09.894233       1 server_linux.go:132] "Using iptables Proxier"
	I1101 09:21:09.901928       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1101 09:21:09.902471       1 server.go:527] "Version info" version="v1.34.1"
	I1101 09:21:09.902562       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1101 09:21:09.906960       1 config.go:106] "Starting endpoint slice config controller"
	I1101 09:21:09.907173       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1101 09:21:09.907056       1 config.go:200] "Starting service config controller"
	I1101 09:21:09.907598       1 config.go:309] "Starting node config controller"
	I1101 09:21:09.907638       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1101 09:21:09.907612       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1101 09:21:09.907851       1 config.go:403] "Starting serviceCIDR config controller"
	I1101 09:21:09.907891       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1101 09:21:10.008613       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1101 09:21:10.008645       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1101 09:21:10.008729       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1101 09:21:10.008762       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [cdf866b372073a7755ed447cdf8634d89a5c22e16db02cc9cfe7c76643d51a6c] <==
	I1101 09:21:07.638181       1 serving.go:386] Generated self-signed cert in-memory
	W1101 09:21:08.851839       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1101 09:21:08.851903       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1101 09:21:08.851916       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1101 09:21:08.851926       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1101 09:21:08.903734       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1101 09:21:08.903764       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1101 09:21:08.906674       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1101 09:21:08.906715       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1101 09:21:08.907217       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1101 09:21:08.907260       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1101 09:21:09.006985       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 01 09:21:11 embed-certs-236314 kubelet[715]: I1101 09:21:11.627564     715 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Nov 01 09:21:13 embed-certs-236314 kubelet[715]: I1101 09:21:13.011262     715 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/e4c34dbc-f680-46c7-92ea-18532ff6d5f0-tmp-volume\") pod \"kubernetes-dashboard-855c9754f9-vtj9p\" (UID: \"e4c34dbc-f680-46c7-92ea-18532ff6d5f0\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-vtj9p"
	Nov 01 09:21:13 embed-certs-236314 kubelet[715]: I1101 09:21:13.011328     715 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m52hh\" (UniqueName: \"kubernetes.io/projected/e4c34dbc-f680-46c7-92ea-18532ff6d5f0-kube-api-access-m52hh\") pod \"kubernetes-dashboard-855c9754f9-vtj9p\" (UID: \"e4c34dbc-f680-46c7-92ea-18532ff6d5f0\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-vtj9p"
	Nov 01 09:21:13 embed-certs-236314 kubelet[715]: I1101 09:21:13.011360     715 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/2a276c9f-90da-419c-847f-6569a61c530c-tmp-volume\") pod \"dashboard-metrics-scraper-6ffb444bf9-zt69p\" (UID: \"2a276c9f-90da-419c-847f-6569a61c530c\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-zt69p"
	Nov 01 09:21:13 embed-certs-236314 kubelet[715]: I1101 09:21:13.011497     715 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4f6dp\" (UniqueName: \"kubernetes.io/projected/2a276c9f-90da-419c-847f-6569a61c530c-kube-api-access-4f6dp\") pod \"dashboard-metrics-scraper-6ffb444bf9-zt69p\" (UID: \"2a276c9f-90da-419c-847f-6569a61c530c\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-zt69p"
	Nov 01 09:21:17 embed-certs-236314 kubelet[715]: I1101 09:21:17.356664     715 scope.go:117] "RemoveContainer" containerID="a15942aa21a01eaffc764dcad49f99cdbf20b7627734f2a9353411b6d3a0345e"
	Nov 01 09:21:18 embed-certs-236314 kubelet[715]: I1101 09:21:18.363264     715 scope.go:117] "RemoveContainer" containerID="a15942aa21a01eaffc764dcad49f99cdbf20b7627734f2a9353411b6d3a0345e"
	Nov 01 09:21:18 embed-certs-236314 kubelet[715]: I1101 09:21:18.363846     715 scope.go:117] "RemoveContainer" containerID="c4cc50c8c91718cdd4a830baa14948c9260b14397567a66243b02ec45b341317"
	Nov 01 09:21:18 embed-certs-236314 kubelet[715]: E1101 09:21:18.364124     715 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-zt69p_kubernetes-dashboard(2a276c9f-90da-419c-847f-6569a61c530c)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-zt69p" podUID="2a276c9f-90da-419c-847f-6569a61c530c"
	Nov 01 09:21:19 embed-certs-236314 kubelet[715]: I1101 09:21:19.368508     715 scope.go:117] "RemoveContainer" containerID="c4cc50c8c91718cdd4a830baa14948c9260b14397567a66243b02ec45b341317"
	Nov 01 09:21:19 embed-certs-236314 kubelet[715]: E1101 09:21:19.368758     715 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-zt69p_kubernetes-dashboard(2a276c9f-90da-419c-847f-6569a61c530c)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-zt69p" podUID="2a276c9f-90da-419c-847f-6569a61c530c"
	Nov 01 09:21:23 embed-certs-236314 kubelet[715]: I1101 09:21:23.395546     715 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-vtj9p" podStartSLOduration=2.233380409 podStartE2EDuration="11.395518274s" podCreationTimestamp="2025-11-01 09:21:12 +0000 UTC" firstStartedPulling="2025-11-01 09:21:13.403418027 +0000 UTC m=+7.231927342" lastFinishedPulling="2025-11-01 09:21:22.565555911 +0000 UTC m=+16.394065207" observedRunningTime="2025-11-01 09:21:23.39458662 +0000 UTC m=+17.223095935" watchObservedRunningTime="2025-11-01 09:21:23.395518274 +0000 UTC m=+17.224027594"
	Nov 01 09:21:26 embed-certs-236314 kubelet[715]: I1101 09:21:26.303639     715 scope.go:117] "RemoveContainer" containerID="c4cc50c8c91718cdd4a830baa14948c9260b14397567a66243b02ec45b341317"
	Nov 01 09:21:26 embed-certs-236314 kubelet[715]: E1101 09:21:26.303847     715 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-zt69p_kubernetes-dashboard(2a276c9f-90da-419c-847f-6569a61c530c)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-zt69p" podUID="2a276c9f-90da-419c-847f-6569a61c530c"
	Nov 01 09:21:39 embed-certs-236314 kubelet[715]: I1101 09:21:39.290240     715 scope.go:117] "RemoveContainer" containerID="c4cc50c8c91718cdd4a830baa14948c9260b14397567a66243b02ec45b341317"
	Nov 01 09:21:39 embed-certs-236314 kubelet[715]: I1101 09:21:39.428063     715 scope.go:117] "RemoveContainer" containerID="c4cc50c8c91718cdd4a830baa14948c9260b14397567a66243b02ec45b341317"
	Nov 01 09:21:39 embed-certs-236314 kubelet[715]: I1101 09:21:39.428354     715 scope.go:117] "RemoveContainer" containerID="97e232d23f29552301319ab346cf13e85b89566b637b24177cd78bdfb630fd2f"
	Nov 01 09:21:39 embed-certs-236314 kubelet[715]: E1101 09:21:39.428542     715 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-zt69p_kubernetes-dashboard(2a276c9f-90da-419c-847f-6569a61c530c)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-zt69p" podUID="2a276c9f-90da-419c-847f-6569a61c530c"
	Nov 01 09:21:40 embed-certs-236314 kubelet[715]: I1101 09:21:40.432490     715 scope.go:117] "RemoveContainer" containerID="ff5eeb3598d0ee0d8632ba6b2c43ba490782a9c06cdab6d790fbd85ba9094d8e"
	Nov 01 09:21:46 embed-certs-236314 kubelet[715]: I1101 09:21:46.304344     715 scope.go:117] "RemoveContainer" containerID="97e232d23f29552301319ab346cf13e85b89566b637b24177cd78bdfb630fd2f"
	Nov 01 09:21:46 embed-certs-236314 kubelet[715]: E1101 09:21:46.304682     715 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-zt69p_kubernetes-dashboard(2a276c9f-90da-419c-847f-6569a61c530c)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-zt69p" podUID="2a276c9f-90da-419c-847f-6569a61c530c"
	Nov 01 09:21:55 embed-certs-236314 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 01 09:21:55 embed-certs-236314 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 01 09:21:55 embed-certs-236314 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Nov 01 09:21:55 embed-certs-236314 systemd[1]: kubelet.service: Consumed 1.769s CPU time.
	
	
	==> kubernetes-dashboard [ab9a1c2871ebfdf28b52214510e1799784842fc9e5a2a4f8ac62fa64668e5010] <==
	2025/11/01 09:21:22 Using namespace: kubernetes-dashboard
	2025/11/01 09:21:22 Using in-cluster config to connect to apiserver
	2025/11/01 09:21:22 Using secret token for csrf signing
	2025/11/01 09:21:22 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/11/01 09:21:22 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/11/01 09:21:22 Successful initial request to the apiserver, version: v1.34.1
	2025/11/01 09:21:22 Generating JWE encryption key
	2025/11/01 09:21:22 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/11/01 09:21:22 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/11/01 09:21:23 Initializing JWE encryption key from synchronized object
	2025/11/01 09:21:23 Creating in-cluster Sidecar client
	2025/11/01 09:21:23 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/01 09:21:23 Serving insecurely on HTTP port: 9090
	2025/11/01 09:21:53 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/01 09:21:22 Starting overwatch
	
	
	==> storage-provisioner [6c99ae25ef0e9393dddf231085bca13268e9d35c7587c2535d9874ef0b8bc855] <==
	I1101 09:21:40.481213       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1101 09:21:40.488672       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1101 09:21:40.488711       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1101 09:21:40.491118       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:21:43.948936       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:21:48.209610       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:21:51.807968       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:21:54.862316       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:21:57.884737       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:21:57.890964       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1101 09:21:57.891137       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1101 09:21:57.891328       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-236314_37c25550-526d-4c69-923c-0ab37098c4f0!
	I1101 09:21:57.891358       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"c262ede2-8436-41d8-b457-ce72d2fd66a5", APIVersion:"v1", ResourceVersion:"664", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-236314_37c25550-526d-4c69-923c-0ab37098c4f0 became leader
	W1101 09:21:57.893563       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:21:57.898093       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1101 09:21:57.991631       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-236314_37c25550-526d-4c69-923c-0ab37098c4f0!
	W1101 09:21:59.905418       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:21:59.914943       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [ff5eeb3598d0ee0d8632ba6b2c43ba490782a9c06cdab6d790fbd85ba9094d8e] <==
	I1101 09:21:09.674243       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1101 09:21:39.678080       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-236314 -n embed-certs-236314
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-236314 -n embed-certs-236314: exit status 2 (376.125996ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context embed-certs-236314 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/embed-certs/serial/Pause (6.92s)
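For anyone triaging this locally: the pause invocation that failed here is the same one recorded in the report's Audit log. A minimal way to retry it by hand, assuming the embed-certs-236314 profile still exists and the test build of minikube is on disk (this is an illustrative sketch, not part of the test):

	# Re-run the same pause command the test issued, then check the profile's reported state
	out/minikube-linux-amd64 pause -p embed-certs-236314 --alsologtostderr -v=1
	out/minikube-linux-amd64 status -p embed-certs-236314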

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (2.64s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-648641 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-648641 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (282.39622ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T09:22:03Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-648641 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
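The MK_ADDON_ENABLE_PAUSED error above comes from minikube's paused-state check, which shells out to runc on the node; the stderr shows that check failing because /run/runc is absent. A minimal sketch for reproducing the check by hand, assuming the default-k8s-diff-port-648641 profile is still running (wrapping the command in minikube ssh is an assumption for illustration; the runc command itself is the one quoted in the error):

	# Run the same command minikube's "list paused" check executes inside the node
	out/minikube-linux-amd64 ssh -p default-k8s-diff-port-648641 -- sudo runc list -f json
	# Confirm whether the runc state directory referenced in the error exists on the node
	out/minikube-linux-amd64 ssh -p default-k8s-diff-port-648641 -- ls -la /run/runc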
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-648641 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-648641 describe deploy/metrics-server -n kube-system: exit status 1 (65.496365ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

                                                
                                                
** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context default-k8s-diff-port-648641 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
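The assertion above inspects the image configured on the metrics-server deployment; because the addon never enabled, the deployment does not exist, which matches the NotFound error. A quick manual check against the same kubectl context (an illustrative query, not part of the test harness):

	# Print the image configured on the metrics-server deployment; a NotFound response here matches the failure above
	kubectl --context default-k8s-diff-port-648641 -n kube-system get deployment metrics-server \
	  -o jsonpath='{.spec.template.spec.containers[*].image}'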
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect default-k8s-diff-port-648641
helpers_test.go:243: (dbg) docker inspect default-k8s-diff-port-648641:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "57e212cd292edfd327da04bf29f32b5b288abd48bc5ceb3af8d29be761bfdf53",
	        "Created": "2025-11-01T09:21:16.622802953Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 263917,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-01T09:21:16.856288315Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:a1caeebaf98ed0136731e905a1e086f77985a42c2ebb5a7e0b3d0bd7fcbe10cc",
	        "ResolvConfPath": "/var/lib/docker/containers/57e212cd292edfd327da04bf29f32b5b288abd48bc5ceb3af8d29be761bfdf53/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/57e212cd292edfd327da04bf29f32b5b288abd48bc5ceb3af8d29be761bfdf53/hostname",
	        "HostsPath": "/var/lib/docker/containers/57e212cd292edfd327da04bf29f32b5b288abd48bc5ceb3af8d29be761bfdf53/hosts",
	        "LogPath": "/var/lib/docker/containers/57e212cd292edfd327da04bf29f32b5b288abd48bc5ceb3af8d29be761bfdf53/57e212cd292edfd327da04bf29f32b5b288abd48bc5ceb3af8d29be761bfdf53-json.log",
	        "Name": "/default-k8s-diff-port-648641",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-648641:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "default-k8s-diff-port-648641",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "57e212cd292edfd327da04bf29f32b5b288abd48bc5ceb3af8d29be761bfdf53",
	                "LowerDir": "/var/lib/docker/overlay2/5e7c7f3822b950cf98e6234ac809850a021b136b26905d554019d5f32326262b-init/diff:/var/lib/docker/overlay2/95b4043403bc6c2b0722fad0bb31817195e1e004282cc78b772e7c8c9f9def2d/diff",
	                "MergedDir": "/var/lib/docker/overlay2/5e7c7f3822b950cf98e6234ac809850a021b136b26905d554019d5f32326262b/merged",
	                "UpperDir": "/var/lib/docker/overlay2/5e7c7f3822b950cf98e6234ac809850a021b136b26905d554019d5f32326262b/diff",
	                "WorkDir": "/var/lib/docker/overlay2/5e7c7f3822b950cf98e6234ac809850a021b136b26905d554019d5f32326262b/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-648641",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-648641/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-648641",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-648641",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-648641",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "0cc19ba0eac2be425788d7a05b5b4c5f7ca9a636938ba3bb2f94b2f8cafa8d59",
	            "SandboxKey": "/var/run/docker/netns/0cc19ba0eac2",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33083"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33084"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33087"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33085"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33086"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "default-k8s-diff-port-648641": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "4a:06:09:4e:6e:e8",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "7a970666a21b0480d187f349d9b6ff5e5ba4999bec31b90faf658b9146692b6b",
	                    "EndpointID": "d83030b55ba0cfff41aca2899bb01b9ed7f23ea2ccca626ed91d554e04d2f665",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-648641",
	                        "57e212cd292e"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-648641 -n default-k8s-diff-port-648641
helpers_test.go:252: <<< TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-648641 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-648641 logs -n 25: (1.344176405s)
helpers_test.go:260: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ addons  │ enable dashboard -p no-preload-397460 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-397460            │ jenkins │ v1.37.0 │ 01 Nov 25 09:20 UTC │ 01 Nov 25 09:20 UTC │
	│ start   │ -p no-preload-397460 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-397460            │ jenkins │ v1.37.0 │ 01 Nov 25 09:20 UTC │ 01 Nov 25 09:20 UTC │
	│ addons  │ enable metrics-server -p embed-certs-236314 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-236314           │ jenkins │ v1.37.0 │ 01 Nov 25 09:20 UTC │                     │
	│ stop    │ -p embed-certs-236314 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-236314           │ jenkins │ v1.37.0 │ 01 Nov 25 09:20 UTC │ 01 Nov 25 09:20 UTC │
	│ addons  │ enable dashboard -p embed-certs-236314 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-236314           │ jenkins │ v1.37.0 │ 01 Nov 25 09:20 UTC │ 01 Nov 25 09:20 UTC │
	│ start   │ -p embed-certs-236314 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-236314           │ jenkins │ v1.37.0 │ 01 Nov 25 09:20 UTC │ 01 Nov 25 09:21 UTC │
	│ image   │ old-k8s-version-152344 image list --format=json                                                                                                                                                                                               │ old-k8s-version-152344       │ jenkins │ v1.37.0 │ 01 Nov 25 09:21 UTC │ 01 Nov 25 09:21 UTC │
	│ pause   │ -p old-k8s-version-152344 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-152344       │ jenkins │ v1.37.0 │ 01 Nov 25 09:21 UTC │                     │
	│ image   │ no-preload-397460 image list --format=json                                                                                                                                                                                                    │ no-preload-397460            │ jenkins │ v1.37.0 │ 01 Nov 25 09:21 UTC │ 01 Nov 25 09:21 UTC │
	│ pause   │ -p no-preload-397460 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-397460            │ jenkins │ v1.37.0 │ 01 Nov 25 09:21 UTC │                     │
	│ delete  │ -p old-k8s-version-152344                                                                                                                                                                                                                     │ old-k8s-version-152344       │ jenkins │ v1.37.0 │ 01 Nov 25 09:21 UTC │ 01 Nov 25 09:21 UTC │
	│ delete  │ -p old-k8s-version-152344                                                                                                                                                                                                                     │ old-k8s-version-152344       │ jenkins │ v1.37.0 │ 01 Nov 25 09:21 UTC │ 01 Nov 25 09:21 UTC │
	│ delete  │ -p no-preload-397460                                                                                                                                                                                                                          │ no-preload-397460            │ jenkins │ v1.37.0 │ 01 Nov 25 09:21 UTC │ 01 Nov 25 09:21 UTC │
	│ delete  │ -p disable-driver-mounts-366530                                                                                                                                                                                                               │ disable-driver-mounts-366530 │ jenkins │ v1.37.0 │ 01 Nov 25 09:21 UTC │ 01 Nov 25 09:21 UTC │
	│ start   │ -p default-k8s-diff-port-648641 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-648641 │ jenkins │ v1.37.0 │ 01 Nov 25 09:21 UTC │ 01 Nov 25 09:21 UTC │
	│ delete  │ -p no-preload-397460                                                                                                                                                                                                                          │ no-preload-397460            │ jenkins │ v1.37.0 │ 01 Nov 25 09:21 UTC │ 01 Nov 25 09:21 UTC │
	│ start   │ -p newest-cni-340756 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-340756            │ jenkins │ v1.37.0 │ 01 Nov 25 09:21 UTC │ 01 Nov 25 09:21 UTC │
	│ addons  │ enable metrics-server -p newest-cni-340756 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-340756            │ jenkins │ v1.37.0 │ 01 Nov 25 09:21 UTC │                     │
	│ stop    │ -p newest-cni-340756 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-340756            │ jenkins │ v1.37.0 │ 01 Nov 25 09:21 UTC │ 01 Nov 25 09:21 UTC │
	│ image   │ embed-certs-236314 image list --format=json                                                                                                                                                                                                   │ embed-certs-236314           │ jenkins │ v1.37.0 │ 01 Nov 25 09:21 UTC │ 01 Nov 25 09:21 UTC │
	│ pause   │ -p embed-certs-236314 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-236314           │ jenkins │ v1.37.0 │ 01 Nov 25 09:21 UTC │                     │
	│ addons  │ enable dashboard -p newest-cni-340756 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-340756            │ jenkins │ v1.37.0 │ 01 Nov 25 09:21 UTC │ 01 Nov 25 09:21 UTC │
	│ start   │ -p newest-cni-340756 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-340756            │ jenkins │ v1.37.0 │ 01 Nov 25 09:21 UTC │                     │
	│ delete  │ -p embed-certs-236314                                                                                                                                                                                                                         │ embed-certs-236314           │ jenkins │ v1.37.0 │ 01 Nov 25 09:22 UTC │                     │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-648641 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-648641 │ jenkins │ v1.37.0 │ 01 Nov 25 09:22 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/01 09:21:59
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1101 09:21:59.659740  273527 out.go:360] Setting OutFile to fd 1 ...
	I1101 09:21:59.659892  273527 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:21:59.659899  273527 out.go:374] Setting ErrFile to fd 2...
	I1101 09:21:59.659905  273527 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:21:59.660241  273527 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21835-5913/.minikube/bin
	I1101 09:21:59.661210  273527 out.go:368] Setting JSON to false
	I1101 09:21:59.663112  273527 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":3868,"bootTime":1761985052,"procs":306,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1101 09:21:59.663270  273527 start.go:143] virtualization: kvm guest
	I1101 09:21:59.666121  273527 out.go:179] * [newest-cni-340756] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1101 09:21:59.667721  273527 out.go:179]   - MINIKUBE_LOCATION=21835
	I1101 09:21:59.667800  273527 notify.go:221] Checking for updates...
	I1101 09:21:59.669529  273527 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1101 09:21:59.671051  273527 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21835-5913/kubeconfig
	I1101 09:21:59.672482  273527 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21835-5913/.minikube
	I1101 09:21:59.674036  273527 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1101 09:21:59.675485  273527 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1101 09:21:59.677472  273527 config.go:182] Loaded profile config "newest-cni-340756": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:21:59.678195  273527 driver.go:422] Setting default libvirt URI to qemu:///system
	I1101 09:21:59.711881  273527 docker.go:124] docker version: linux-28.5.1:Docker Engine - Community
	I1101 09:21:59.711997  273527 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 09:21:59.796757  273527 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:72 OomKillDisable:false NGoroutines:75 SystemTime:2025-11-01 09:21:59.783182454 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1101 09:21:59.796919  273527 docker.go:319] overlay module found
	I1101 09:21:59.803327  273527 out.go:179] * Using the docker driver based on existing profile
	I1101 09:21:57.448518  216020 out.go:252]   - Booting up control plane ...
	I1101 09:21:57.448668  216020 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1101 09:21:57.448799  216020 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1101 09:21:57.448929  216020 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1101 09:21:57.464387  216020 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1101 09:21:57.464588  216020 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1101 09:21:57.472079  216020 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1101 09:21:57.472467  216020 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1101 09:21:57.472547  216020 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1101 09:21:57.589934  216020 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1101 09:21:57.590075  216020 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1101 09:21:58.091173  216020 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 501.402761ms
	I1101 09:21:58.094476  216020 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1101 09:21:58.094620  216020 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	I1101 09:21:58.094735  216020 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1101 09:21:58.094851  216020 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1101 09:21:59.804545  273527 start.go:309] selected driver: docker
	I1101 09:21:59.804565  273527 start.go:930] validating driver "docker" against &{Name:newest-cni-340756 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-340756 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker Mou
ntIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 09:21:59.804709  273527 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1101 09:21:59.806908  273527 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 09:21:59.895156  273527 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:72 OomKillDisable:false NGoroutines:75 SystemTime:2025-11-01 09:21:59.87713648 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x8
6_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1101 09:21:59.895540  273527 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1101 09:21:59.895575  273527 cni.go:84] Creating CNI manager for ""
	I1101 09:21:59.895645  273527 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1101 09:21:59.895717  273527 start.go:353] cluster config:
	{Name:newest-cni-340756 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-340756 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker
BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 09:21:59.899006  273527 out.go:179] * Starting "newest-cni-340756" primary control-plane node in "newest-cni-340756" cluster
	I1101 09:21:59.900735  273527 cache.go:124] Beginning downloading kic base image for docker with crio
	I1101 09:21:59.903338  273527 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1101 09:21:59.905077  273527 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1101 09:21:59.905132  273527 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21835-5913/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1101 09:21:59.905142  273527 cache.go:59] Caching tarball of preloaded images
	I1101 09:21:59.905291  273527 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1101 09:21:59.905536  273527 preload.go:233] Found /home/jenkins/minikube-integration/21835-5913/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1101 09:21:59.905591  273527 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1101 09:21:59.905769  273527 profile.go:143] Saving config to /home/jenkins/minikube-integration/21835-5913/.minikube/profiles/newest-cni-340756/config.json ...
	I1101 09:21:59.934340  273527 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1101 09:21:59.934376  273527 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1101 09:21:59.934393  273527 cache.go:233] Successfully downloaded all kic artifacts
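At this point the restart is running entirely from the local cache: the preload tarball and the kicbase image were both found, so nothing is pulled. A quick manual check of the same artifacts, using the path and image reference from the log above, would look roughly like:

  # cached preload tarball for v1.34.1 on cri-o (path taken from the log)
  ls -lh /home/jenkins/minikube-integration/21835-5913/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
  # kicbase image already present in the local docker daemon
  docker image ls --digests gcr.io/k8s-minikube/kicbase-builds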
	I1101 09:21:59.934424  273527 start.go:360] acquireMachinesLock for newest-cni-340756: {Name:mk88172481da3b8a8d740f548867bdcc84a2d863 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1101 09:21:59.934510  273527 start.go:364] duration metric: took 50.478µs to acquireMachinesLock for "newest-cni-340756"
	I1101 09:21:59.934529  273527 start.go:96] Skipping create...Using existing machine configuration
	I1101 09:21:59.934535  273527 fix.go:54] fixHost starting: 
	I1101 09:21:59.934814  273527 cli_runner.go:164] Run: docker container inspect newest-cni-340756 --format={{.State.Status}}
	I1101 09:21:59.958384  273527 fix.go:112] recreateIfNeeded on newest-cni-340756: state=Stopped err=<nil>
	W1101 09:21:59.958431  273527 fix.go:138] unexpected machine state, will restart: <nil>
	I1101 09:22:00.063799  216020 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 1.968675826s
	I1101 09:22:00.954760  216020 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 2.860169305s
	I1101 09:22:02.596576  216020 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 4.502045351s
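These are the same endpoints kubeadm polls during [control-plane-check], so they can be probed by hand on the node if a component ever stalls; a sketch via minikube ssh, with the ports taken from the log (-k because the local health endpoints use self-signed certs):

  minikube -p kubernetes-upgrade-846924 ssh -- curl -sk https://127.0.0.1:10257/healthz   # kube-controller-manager
  minikube -p kubernetes-upgrade-846924 ssh -- curl -sk https://127.0.0.1:10259/livez     # kube-scheduler
  minikube -p kubernetes-upgrade-846924 ssh -- curl -s  http://127.0.0.1:10248/healthz    # kubelet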
	I1101 09:22:02.609117  216020 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1101 09:22:02.621450  216020 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1101 09:22:02.634128  216020 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1101 09:22:02.634421  216020 kubeadm.go:319] [mark-control-plane] Marking the node kubernetes-upgrade-846924 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1101 09:22:02.644990  216020 kubeadm.go:319] [bootstrap-token] Using token: jmbu6j.onhrx0oai5vz4ft0
	I1101 09:22:02.646741  216020 out.go:252]   - Configuring RBAC rules ...
	I1101 09:22:02.646929  216020 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1101 09:22:02.650667  216020 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1101 09:22:02.658549  216020 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1101 09:22:02.661997  216020 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1101 09:22:02.666484  216020 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1101 09:22:02.669855  216020 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1101 09:22:03.002925  216020 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1101 09:22:03.422671  216020 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1101 09:22:04.004827  216020 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1101 09:22:04.006156  216020 kubeadm.go:319] 
	I1101 09:22:04.006299  216020 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1101 09:22:04.006323  216020 kubeadm.go:319] 
	I1101 09:22:04.006441  216020 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1101 09:22:04.006457  216020 kubeadm.go:319] 
	I1101 09:22:04.006506  216020 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1101 09:22:04.006603  216020 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1101 09:22:04.006716  216020 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1101 09:22:04.006734  216020 kubeadm.go:319] 
	I1101 09:22:04.006819  216020 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1101 09:22:04.006828  216020 kubeadm.go:319] 
	I1101 09:22:04.006951  216020 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1101 09:22:04.006963  216020 kubeadm.go:319] 
	I1101 09:22:04.007043  216020 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1101 09:22:04.007151  216020 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1101 09:22:04.007258  216020 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1101 09:22:04.007279  216020 kubeadm.go:319] 
	I1101 09:22:04.007404  216020 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1101 09:22:04.007539  216020 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1101 09:22:04.007553  216020 kubeadm.go:319] 
	I1101 09:22:04.007668  216020 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token jmbu6j.onhrx0oai5vz4ft0 \
	I1101 09:22:04.007806  216020 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:77adef89518789e9abc276228bed68518bce4d031c33004dbbcc2105ab5e0752 \
	I1101 09:22:04.007843  216020 kubeadm.go:319] 	--control-plane 
	I1101 09:22:04.007852  216020 kubeadm.go:319] 
	I1101 09:22:04.008039  216020 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1101 09:22:04.008055  216020 kubeadm.go:319] 
	I1101 09:22:04.008182  216020 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token jmbu6j.onhrx0oai5vz4ft0 \
	I1101 09:22:04.008331  216020 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:77adef89518789e9abc276228bed68518bce4d031c33004dbbcc2105ab5e0752 
	I1101 09:22:04.011274  216020 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1043-gcp\n", err: exit status 1
	I1101 09:22:04.011425  216020 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
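The --discovery-token-ca-cert-hash printed above is just the SHA-256 of the cluster CA public key, so it can be recomputed on the control-plane node with the standard upstream openssl pipeline; a sketch, assuming minikube's usual certificate location of /var/lib/minikube/certs/ca.crt rather than the kubeadm default /etc/kubernetes/pki/ca.crt:

  openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
    | openssl rsa -pubin -outform der 2>/dev/null \
    | openssl dgst -sha256 -hex | sed 's/^.* //'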
	I1101 09:22:04.011461  216020 cni.go:84] Creating CNI manager for ""
	I1101 09:22:04.011470  216020 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1101 09:22:04.013812  216020 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1101 09:22:04.015129  216020 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1101 09:22:04.020627  216020 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1101 09:22:04.020650  216020 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1101 09:22:04.038131  216020 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
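Here the recommended kindnet manifest is applied with the cluster's own kubectl binary. Assuming the DaemonSet keeps its usual name, the rollout can be confirmed afterwards with, e.g.:

  kubectl -n kube-system rollout status daemonset/kindnet
  kubectl -n kube-system get daemonset kindnet -o wide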
	I1101 09:22:04.311975  216020 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1101 09:22:04.312047  216020 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 09:22:04.312059  216020 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes kubernetes-upgrade-846924 minikube.k8s.io/updated_at=2025_11_01T09_22_04_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=21e20c7776311c6e29254646bf2620ea610dd192 minikube.k8s.io/name=kubernetes-upgrade-846924 minikube.k8s.io/primary=true
	I1101 09:22:04.432086  216020 kubeadm.go:1114] duration metric: took 120.10733ms to wait for elevateKubeSystemPrivileges
	I1101 09:22:04.432132  216020 ops.go:34] apiserver oom_adj: -16
	I1101 09:22:04.432144  216020 kubeadm.go:403] duration metric: took 4m13.303568084s to StartCluster
	I1101 09:22:04.432164  216020 settings.go:142] acquiring lock: {Name:mkb1ba7d0d4bb15f3f0746ce486d72703f901580 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:22:04.432236  216020 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21835-5913/kubeconfig
	I1101 09:22:04.433881  216020 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21835-5913/kubeconfig: {Name:mk9d3795c1875d6d9ba81c81d9d8436a6b7942d2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:22:04.434194  216020 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1101 09:22:04.434283  216020 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1101 09:22:04.434372  216020 addons.go:70] Setting storage-provisioner=true in profile "kubernetes-upgrade-846924"
	I1101 09:22:04.434386  216020 addons.go:239] Setting addon storage-provisioner=true in "kubernetes-upgrade-846924"
	W1101 09:22:04.434397  216020 addons.go:248] addon storage-provisioner should already be in state true
	I1101 09:22:04.434401  216020 config.go:182] Loaded profile config "kubernetes-upgrade-846924": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:22:04.434418  216020 host.go:66] Checking if "kubernetes-upgrade-846924" exists ...
	I1101 09:22:04.434443  216020 addons.go:70] Setting default-storageclass=true in profile "kubernetes-upgrade-846924"
	I1101 09:22:04.434471  216020 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "kubernetes-upgrade-846924"
	I1101 09:22:04.434777  216020 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-846924 --format={{.State.Status}}
	I1101 09:22:04.434879  216020 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-846924 --format={{.State.Status}}
	I1101 09:22:04.437300  216020 out.go:179] * Verifying Kubernetes components...
	I1101 09:22:04.439349  216020 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 09:22:04.464364  216020 kapi.go:59] client config for kubernetes-upgrade-846924: &rest.Config{Host:"https://192.168.85.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21835-5913/.minikube/profiles/kubernetes-upgrade-846924/client.crt", KeyFile:"/home/jenkins/minikube-integration/21835-5913/.minikube/profiles/kubernetes-upgrade-846924/client.key", CAFile:"/home/jenkins/minikube-integration/21835-5913/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CA
Data:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x281c680), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1101 09:22:04.464791  216020 addons.go:239] Setting addon default-storageclass=true in "kubernetes-upgrade-846924"
	W1101 09:22:04.464814  216020 addons.go:248] addon default-storageclass should already be in state true
	I1101 09:22:04.464847  216020 host.go:66] Checking if "kubernetes-upgrade-846924" exists ...
	I1101 09:22:04.465290  216020 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
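The two addons being re-enabled here are storage-provisioner and default-storageclass. A quick post-start check that both landed (the pod name is fixed; the default StorageClass in minikube is normally called "standard"):

  kubectl -n kube-system get pod storage-provisioner
  kubectl get storageclass standard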
	
	
	==> CRI-O <==
	Nov 01 09:21:53 default-k8s-diff-port-648641 crio[770]: time="2025-11-01T09:21:53.429051229Z" level=info msg="Starting container: c95374008241f20b2e5c9a1df4621acbd78eb0470a7c9c06614ae769d95dda12" id=9d69b346-203e-41d4-ba10-49c428154acc name=/runtime.v1.RuntimeService/StartContainer
	Nov 01 09:21:53 default-k8s-diff-port-648641 crio[770]: time="2025-11-01T09:21:53.431458585Z" level=info msg="Started container" PID=1844 containerID=c95374008241f20b2e5c9a1df4621acbd78eb0470a7c9c06614ae769d95dda12 description=kube-system/coredns-66bc5c9577-nwj2s/coredns id=9d69b346-203e-41d4-ba10-49c428154acc name=/runtime.v1.RuntimeService/StartContainer sandboxID=5f2f48ea956e448d6fe57d0da0e5bc250f902cee278d6524fb6ecbfbf5b39222
	Nov 01 09:21:55 default-k8s-diff-port-648641 crio[770]: time="2025-11-01T09:21:55.868927064Z" level=info msg="Running pod sandbox: default/busybox/POD" id=ebf7209c-5527-4edc-ada4-f5b3e4fd0c07 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 01 09:21:55 default-k8s-diff-port-648641 crio[770]: time="2025-11-01T09:21:55.869048396Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 09:21:55 default-k8s-diff-port-648641 crio[770]: time="2025-11-01T09:21:55.874478727Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:3a656852736aaa8cdc0a3ea8f97d55181edd3cd643e4b9355f0456477ebd0c10 UID:99fe2b46-4570-4a28-91ed-cea90f970719 NetNS:/var/run/netns/0fc88f7d-799d-4fa9-b81e-25311dd85eef Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc00015c3d8}] Aliases:map[]}"
	Nov 01 09:21:55 default-k8s-diff-port-648641 crio[770]: time="2025-11-01T09:21:55.874509944Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Nov 01 09:21:55 default-k8s-diff-port-648641 crio[770]: time="2025-11-01T09:21:55.884908373Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:3a656852736aaa8cdc0a3ea8f97d55181edd3cd643e4b9355f0456477ebd0c10 UID:99fe2b46-4570-4a28-91ed-cea90f970719 NetNS:/var/run/netns/0fc88f7d-799d-4fa9-b81e-25311dd85eef Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc00015c3d8}] Aliases:map[]}"
	Nov 01 09:21:55 default-k8s-diff-port-648641 crio[770]: time="2025-11-01T09:21:55.88510407Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Nov 01 09:21:55 default-k8s-diff-port-648641 crio[770]: time="2025-11-01T09:21:55.885896267Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Nov 01 09:21:55 default-k8s-diff-port-648641 crio[770]: time="2025-11-01T09:21:55.886792114Z" level=info msg="Ran pod sandbox 3a656852736aaa8cdc0a3ea8f97d55181edd3cd643e4b9355f0456477ebd0c10 with infra container: default/busybox/POD" id=ebf7209c-5527-4edc-ada4-f5b3e4fd0c07 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 01 09:21:55 default-k8s-diff-port-648641 crio[770]: time="2025-11-01T09:21:55.888110382Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=c9846ab0-6cac-458e-9244-48ededf596c0 name=/runtime.v1.ImageService/ImageStatus
	Nov 01 09:21:55 default-k8s-diff-port-648641 crio[770]: time="2025-11-01T09:21:55.88821893Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=c9846ab0-6cac-458e-9244-48ededf596c0 name=/runtime.v1.ImageService/ImageStatus
	Nov 01 09:21:55 default-k8s-diff-port-648641 crio[770]: time="2025-11-01T09:21:55.888247259Z" level=info msg="Neither image nor artfiact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=c9846ab0-6cac-458e-9244-48ededf596c0 name=/runtime.v1.ImageService/ImageStatus
	Nov 01 09:21:55 default-k8s-diff-port-648641 crio[770]: time="2025-11-01T09:21:55.889051014Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=213e4f74-b236-4539-a09f-b7d212147e04 name=/runtime.v1.ImageService/PullImage
	Nov 01 09:21:55 default-k8s-diff-port-648641 crio[770]: time="2025-11-01T09:21:55.893145715Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Nov 01 09:21:56 default-k8s-diff-port-648641 crio[770]: time="2025-11-01T09:21:56.667810914Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998" id=213e4f74-b236-4539-a09f-b7d212147e04 name=/runtime.v1.ImageService/PullImage
	Nov 01 09:21:56 default-k8s-diff-port-648641 crio[770]: time="2025-11-01T09:21:56.668779183Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=bb94ee9d-5e6d-40e8-bf8e-502463ed38ea name=/runtime.v1.ImageService/ImageStatus
	Nov 01 09:21:56 default-k8s-diff-port-648641 crio[770]: time="2025-11-01T09:21:56.670236496Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=5aaf9ebf-b761-4fd0-8a0f-95e8d7eaccbe name=/runtime.v1.ImageService/ImageStatus
	Nov 01 09:21:56 default-k8s-diff-port-648641 crio[770]: time="2025-11-01T09:21:56.673840982Z" level=info msg="Creating container: default/busybox/busybox" id=7c6ea0e9-b37c-4088-bd2e-53373c00fcb5 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 09:21:56 default-k8s-diff-port-648641 crio[770]: time="2025-11-01T09:21:56.674011757Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 09:21:56 default-k8s-diff-port-648641 crio[770]: time="2025-11-01T09:21:56.677620252Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 09:21:56 default-k8s-diff-port-648641 crio[770]: time="2025-11-01T09:21:56.678075482Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 09:21:56 default-k8s-diff-port-648641 crio[770]: time="2025-11-01T09:21:56.711599534Z" level=info msg="Created container cd03995485a7df38ceac4dbfd1b93c313d3269a270814e34279943611be5f880: default/busybox/busybox" id=7c6ea0e9-b37c-4088-bd2e-53373c00fcb5 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 09:21:56 default-k8s-diff-port-648641 crio[770]: time="2025-11-01T09:21:56.712968958Z" level=info msg="Starting container: cd03995485a7df38ceac4dbfd1b93c313d3269a270814e34279943611be5f880" id=857c7095-4e68-4bdb-bb03-565abf8b78f9 name=/runtime.v1.RuntimeService/StartContainer
	Nov 01 09:21:56 default-k8s-diff-port-648641 crio[770]: time="2025-11-01T09:21:56.715060037Z" level=info msg="Started container" PID=1922 containerID=cd03995485a7df38ceac4dbfd1b93c313d3269a270814e34279943611be5f880 description=default/busybox/busybox id=857c7095-4e68-4bdb-bb03-565abf8b78f9 name=/runtime.v1.RuntimeService/StartContainer sandboxID=3a656852736aaa8cdc0a3ea8f97d55181edd3cd643e4b9355f0456477ebd0c10
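The CRI-O block above is gathered from the runtime's journal on the node; roughly the same tail can be pulled directly, assuming the standard crio systemd unit:

  minikube -p default-k8s-diff-port-648641 ssh -- sudo journalctl -u crio --no-pager -n 25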
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                                    NAMESPACE
	cd03995485a7d       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998   8 seconds ago       Running             busybox                   0                   3a656852736aa       busybox                                                default
	c95374008241f       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                      11 seconds ago      Running             coredns                   0                   5f2f48ea956e4       coredns-66bc5c9577-nwj2s                               kube-system
	11fa4e209ea0d       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      11 seconds ago      Running             storage-provisioner       0                   437b72d359080       storage-provisioner                                    kube-system
	3958d8d28d261       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                      23 seconds ago      Running             kindnet-cni               0                   1a9ba73fc3452       kindnet-fr9cg                                          kube-system
	acb575e89c7e9       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                      23 seconds ago      Running             kube-proxy                0                   2b974e9e67c55       kube-proxy-nwrt4                                       kube-system
	0daab395cede2       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                      33 seconds ago      Running             kube-apiserver            0                   2bc8bd860f57f       kube-apiserver-default-k8s-diff-port-648641            kube-system
	1ad6cae644143       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                      33 seconds ago      Running             etcd                      0                   bd297f4d0227b       etcd-default-k8s-diff-port-648641                      kube-system
	e5806b07991d2       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                      33 seconds ago      Running             kube-controller-manager   0                   dc3dc1a36f489       kube-controller-manager-default-k8s-diff-port-648641   kube-system
	6fb0d18953662       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                      33 seconds ago      Running             kube-scheduler            0                   6eb2a1626738b       kube-scheduler-default-k8s-diff-port-648641            kube-system
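This table mirrors what crictl reports inside the node, so it can be reproduced by hand if needed:

  minikube -p default-k8s-diff-port-648641 ssh -- sudo crictl ps -a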
	
	
	==> coredns [c95374008241f20b2e5c9a1df4621acbd78eb0470a7c9c06614ae769d95dda12] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 66f0a748f44f6317a6b122af3f457c9dd0ecaed8718ffbf95a69434523efd9ec4992e71f54c7edd5753646fe9af89ac2138b9c3ce14d4a0ba9d2372a55f120bb
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:44619 - 23404 "HINFO IN 4411641530146762938.4690019580173939385. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.074812145s
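CoreDNS started cleanly; the single NXDOMAIN entry is most likely its own loop-detection probe (a random HINFO query sent at startup), not a failed lookup from a workload. The same logs are reachable through the standard kube-dns label, e.g.:

  kubectl --context default-k8s-diff-port-648641 -n kube-system logs -l k8s-app=kube-dns --tail=20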
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-648641
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-648641
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=21e20c7776311c6e29254646bf2620ea610dd192
	                    minikube.k8s.io/name=default-k8s-diff-port-648641
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_01T09_21_37_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 01 Nov 2025 09:21:33 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-648641
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 01 Nov 2025 09:21:56 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 01 Nov 2025 09:21:53 +0000   Sat, 01 Nov 2025 09:21:31 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 01 Nov 2025 09:21:53 +0000   Sat, 01 Nov 2025 09:21:31 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 01 Nov 2025 09:21:53 +0000   Sat, 01 Nov 2025 09:21:31 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 01 Nov 2025 09:21:53 +0000   Sat, 01 Nov 2025 09:21:53 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.103.2
	  Hostname:    default-k8s-diff-port-648641
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863348Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863348Ki
	  pods:               110
	System Info:
	  Machine ID:                 98aac72b9abe9f06f1b9b38568f5cc96
	  System UUID:                62320ade-1784-4153-9303-00914bb09bcc
	  Boot ID:                    87f40279-f2b1-40f3-beea-00aea5942dd1
	  Kernel Version:             6.8.0-1043-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         10s
	  kube-system                 coredns-66bc5c9577-nwj2s                                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     24s
	  kube-system                 etcd-default-k8s-diff-port-648641                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         29s
	  kube-system                 kindnet-fr9cg                                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      24s
	  kube-system                 kube-apiserver-default-k8s-diff-port-648641             250m (3%)     0 (0%)      0 (0%)           0 (0%)         29s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-648641    200m (2%)     0 (0%)      0 (0%)           0 (0%)         29s
	  kube-system                 kube-proxy-nwrt4                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         24s
	  kube-system                 kube-scheduler-default-k8s-diff-port-648641             100m (1%)     0 (0%)      0 (0%)           0 (0%)         30s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         23s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 23s   kube-proxy       
	  Normal  Starting                 29s   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  29s   kubelet          Node default-k8s-diff-port-648641 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    29s   kubelet          Node default-k8s-diff-port-648641 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     29s   kubelet          Node default-k8s-diff-port-648641 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           25s   node-controller  Node default-k8s-diff-port-648641 event: Registered Node default-k8s-diff-port-648641 in Controller
	  Normal  NodeReady                12s   kubelet          Node default-k8s-diff-port-648641 status is now: NodeReady
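The node description above is plain kubectl output and can be regenerated at any point while the profile is up:

  kubectl --context default-k8s-diff-port-648641 describe node default-k8s-diff-port-648641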
	
	
	==> dmesg <==
	[  +0.000025] ll header: 00000000: 0a 94 8e 4a 1a 1e 72 e7 eb 50 9b c7 08 00
	[  +2.047726] IPv4: martian source 10.105.176.244 from 192.168.49.2, on dev eth0
	[  +0.000008] ll header: 00000000: 0a 94 8e 4a 1a 1e 72 e7 eb 50 9b c7 08 00
	[Nov 1 08:38] IPv4: martian source 10.105.176.244 from 192.168.49.2, on dev eth0
	[  +0.000025] ll header: 00000000: 0a 94 8e 4a 1a 1e 72 e7 eb 50 9b c7 08 00
	[  +2.215674] IPv4: martian source 10.105.176.244 from 192.168.49.2, on dev eth0
	[  +0.000007] ll header: 00000000: 0a 94 8e 4a 1a 1e 72 e7 eb 50 9b c7 08 00
	[  +1.047939] IPv4: martian source 10.244.0.3 from 192.168.49.2, on dev eth0
	[  +0.000023] ll header: 00000000: 0a 94 8e 4a 1a 1e 72 e7 eb 50 9b c7 08 00
	[  +1.023915] IPv4: martian source 10.244.0.3 from 192.168.49.2, on dev eth0
	[  +0.000007] ll header: 00000000: 0a 94 8e 4a 1a 1e 72 e7 eb 50 9b c7 08 00
	[  +1.023851] IPv4: martian source 10.244.0.3 from 192.168.49.2, on dev eth0
	[  +0.000021] ll header: 00000000: 0a 94 8e 4a 1a 1e 72 e7 eb 50 9b c7 08 00
	[  +1.023867] IPv4: martian source 10.244.0.3 from 192.168.49.2, on dev eth0
	[  +0.000026] ll header: 00000000: 0a 94 8e 4a 1a 1e 72 e7 eb 50 9b c7 08 00
	[  +1.023880] IPv4: martian source 10.244.0.3 from 192.168.49.2, on dev eth0
	[  +0.000033] ll header: 00000000: 0a 94 8e 4a 1a 1e 72 e7 eb 50 9b c7 08 00
	[  +1.023884] IPv4: martian source 10.244.0.3 from 192.168.49.2, on dev eth0
	[  +0.000029] ll header: 00000000: 0a 94 8e 4a 1a 1e 72 e7 eb 50 9b c7 08 00
	[  +1.023851] IPv4: martian source 10.244.0.3 from 192.168.49.2, on dev eth0
	[  +0.000028] ll header: 00000000: 0a 94 8e 4a 1a 1e 72 e7 eb 50 9b c7 08 00
	[  +4.031524] IPv4: martian source 10.244.0.3 from 192.168.49.2, on dev eth0
	[  +0.000026] ll header: 00000000: 0a 94 8e 4a 1a 1e 72 e7 eb 50 9b c7 08 00
	[  +8.255123] IPv4: martian source 10.244.0.3 from 192.168.49.2, on dev eth0
	[  +0.000022] ll header: 00000000: 0a 94 8e 4a 1a 1e 72 e7 eb 50 9b c7 08 00
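The repeated "martian source" entries are the host kernel logging packets whose source address it does not expect on eth0 (likely cross-talk between the per-profile Docker networks on this shared agent), not a failure of the profile under test. Whether such packets are logged at all is a host-level setting:

  sysctl net.ipv4.conf.all.log_martians net.ipv4.conf.all.rp_filter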
	
	
	==> etcd [1ad6cae6441435160361560f2547f983335fc327b40ea2fbc22366e816e93ac8] <==
	{"level":"warn","ts":"2025-11-01T09:21:32.917914Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43482","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:21:32.928326Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43500","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:21:32.936985Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43522","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:21:32.945355Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43540","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:21:32.954062Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43558","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:21:32.962958Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43572","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:21:32.970979Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43582","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:21:32.979477Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43600","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:21:32.988162Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43622","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:21:32.998043Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43632","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:21:33.005110Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43638","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:21:33.014010Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43660","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:21:33.022501Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43682","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:21:33.031159Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43710","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:21:33.040108Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43730","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:21:33.049451Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43736","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:21:33.063583Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43766","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:21:33.073178Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43768","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:21:33.080166Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43796","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:21:33.087168Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43822","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:21:33.094000Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43834","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:21:33.107098Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43860","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:21:33.113882Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43884","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:21:33.120938Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43904","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:21:33.184333Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43924","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 09:22:05 up  1:04,  0 user,  load average: 3.66, 2.87, 1.73
	Linux default-k8s-diff-port-648641 6.8.0-1043-gcp #46~22.04.1-Ubuntu SMP Wed Oct 22 19:00:03 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [3958d8d28d261bdbf0db5daf6a5400c15f844ee02b40db9d0610e54aeb5af889] <==
	I1101 09:21:42.360618       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1101 09:21:42.360928       1 main.go:139] hostIP = 192.168.103.2
	podIP = 192.168.103.2
	I1101 09:21:42.361089       1 main.go:148] setting mtu 1500 for CNI 
	I1101 09:21:42.361109       1 main.go:178] kindnetd IP family: "ipv4"
	I1101 09:21:42.361134       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-01T09:21:42Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1101 09:21:42.566045       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1101 09:21:42.568373       1 controller.go:381] "Waiting for informer caches to sync"
	I1101 09:21:42.568399       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1101 09:21:42.568539       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1101 09:21:42.968791       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1101 09:21:42.968831       1 metrics.go:72] Registering metrics
	I1101 09:21:42.998891       1 controller.go:711] "Syncing nftables rules"
	I1101 09:21:52.569195       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1101 09:21:52.569286       1 main.go:301] handling current node
	I1101 09:22:02.568961       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1101 09:22:02.569021       1 main.go:301] handling current node
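kindnet is managing only its own node here and keeps NetworkPolicy state in nftables via its kube-network-policies controller; the "nri plugin exited" line appears benign when the runtime exposes no NRI socket. The synced tables can be inspected on the node, e.g.:

  minikube -p default-k8s-diff-port-648641 ssh -- sudo nft list tables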
	
	
	==> kube-apiserver [0daab395cede210336513964c8d4ee62202b451c71dc2f018723b3c7ecbc64df] <==
	E1101 09:21:33.767522       1 controller.go:148] "Unhandled Error" err="while syncing ConfigMap \"kube-system/kube-apiserver-legacy-service-account-token-tracking\", err: namespaces \"kube-system\" not found" logger="UnhandledError"
	I1101 09:21:33.814614       1 controller.go:667] quota admission added evaluator for: namespaces
	I1101 09:21:33.819412       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1101 09:21:33.819463       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1101 09:21:33.825417       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1101 09:21:33.825469       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1101 09:21:33.935480       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1101 09:21:34.620186       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1101 09:21:34.624618       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1101 09:21:34.624642       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1101 09:21:35.254163       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1101 09:21:35.297406       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1101 09:21:35.423123       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1101 09:21:35.431719       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.103.2]
	I1101 09:21:35.432793       1 controller.go:667] quota admission added evaluator for: endpoints
	I1101 09:21:35.439339       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1101 09:21:35.655489       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1101 09:21:36.574839       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1101 09:21:36.586149       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1101 09:21:36.595115       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1101 09:21:41.348901       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1101 09:21:41.660450       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1101 09:21:41.671181       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1101 09:21:41.701394       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	E1101 09:22:03.660449       1 conn.go:339] Error on socket receive: read tcp 192.168.103.2:8444->192.168.103.1:42010: use of closed network connection
	
	
	==> kube-controller-manager [e5806b07991d286bb23189e4be98fd28fa5f0ab04e2e27e742bde62a3d6c3872] <==
	I1101 09:21:40.646345       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1101 09:21:40.646423       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1101 09:21:40.646601       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1101 09:21:40.646918       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1101 09:21:40.646938       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1101 09:21:40.646950       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1101 09:21:40.646955       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1101 09:21:40.647029       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1101 09:21:40.647327       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1101 09:21:40.647363       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1101 09:21:40.647515       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1101 09:21:40.648267       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1101 09:21:40.648292       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1101 09:21:40.651429       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1101 09:21:40.651497       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1101 09:21:40.651555       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1101 09:21:40.651566       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1101 09:21:40.651574       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1101 09:21:40.654952       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1101 09:21:40.656033       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1101 09:21:40.658295       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="default-k8s-diff-port-648641" podCIDRs=["10.244.0.0/24"]
	I1101 09:21:40.661284       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1101 09:21:40.681111       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1101 09:21:40.684518       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1101 09:21:55.599742       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [acb575e89c7e9306d8afecd799407e331622ec47388512a7367fffadfeb105c7] <==
	I1101 09:21:42.145001       1 server_linux.go:53] "Using iptables proxy"
	I1101 09:21:42.211831       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1101 09:21:42.312453       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1101 09:21:42.312494       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.103.2"]
	E1101 09:21:42.312590       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1101 09:21:42.334199       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1101 09:21:42.334267       1 server_linux.go:132] "Using iptables Proxier"
	I1101 09:21:42.340079       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1101 09:21:42.340457       1 server.go:527] "Version info" version="v1.34.1"
	I1101 09:21:42.340493       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1101 09:21:42.342270       1 config.go:200] "Starting service config controller"
	I1101 09:21:42.342292       1 config.go:403] "Starting serviceCIDR config controller"
	I1101 09:21:42.342305       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1101 09:21:42.342309       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1101 09:21:42.342327       1 config.go:106] "Starting endpoint slice config controller"
	I1101 09:21:42.342348       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1101 09:21:42.343310       1 config.go:309] "Starting node config controller"
	I1101 09:21:42.343331       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1101 09:21:42.343339       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1101 09:21:42.442524       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1101 09:21:42.442596       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1101 09:21:42.443762       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [6fb0d189536625f4d58378f0a4b9614df3038d4d2f0a8b96c7aa1d21dd85bae7] <==
	E1101 09:21:33.690910       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1101 09:21:33.691140       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1101 09:21:33.691492       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1101 09:21:33.691718       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1101 09:21:33.691775       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1101 09:21:33.691808       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1101 09:21:33.691904       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1101 09:21:33.691996       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1101 09:21:33.692084       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1101 09:21:33.692126       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1101 09:21:33.692142       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1101 09:21:34.608189       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1101 09:21:34.702345       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1101 09:21:34.727384       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1101 09:21:34.755033       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1101 09:21:34.758333       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1101 09:21:34.765303       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1101 09:21:34.810964       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1101 09:21:34.894660       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1101 09:21:34.913157       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1101 09:21:34.970528       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1101 09:21:34.987954       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1101 09:21:35.008341       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1101 09:21:35.019771       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	I1101 09:21:36.787163       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 01 09:21:37 default-k8s-diff-port-648641 kubelet[1324]: I1101 09:21:37.459270    1324 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-default-k8s-diff-port-648641" podStartSLOduration=1.459249354 podStartE2EDuration="1.459249354s" podCreationTimestamp="2025-11-01 09:21:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 09:21:37.45921209 +0000 UTC m=+1.140618555" watchObservedRunningTime="2025-11-01 09:21:37.459249354 +0000 UTC m=+1.140655811"
	Nov 01 09:21:37 default-k8s-diff-port-648641 kubelet[1324]: I1101 09:21:37.487783    1324 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-default-k8s-diff-port-648641" podStartSLOduration=2.487758598 podStartE2EDuration="2.487758598s" podCreationTimestamp="2025-11-01 09:21:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 09:21:37.487668746 +0000 UTC m=+1.169075211" watchObservedRunningTime="2025-11-01 09:21:37.487758598 +0000 UTC m=+1.169165061"
	Nov 01 09:21:37 default-k8s-diff-port-648641 kubelet[1324]: I1101 09:21:37.487921    1324 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-default-k8s-diff-port-648641" podStartSLOduration=1.4879128719999999 podStartE2EDuration="1.487912872s" podCreationTimestamp="2025-11-01 09:21:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 09:21:37.471528573 +0000 UTC m=+1.152935039" watchObservedRunningTime="2025-11-01 09:21:37.487912872 +0000 UTC m=+1.169319339"
	Nov 01 09:21:37 default-k8s-diff-port-648641 kubelet[1324]: I1101 09:21:37.498836    1324 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-default-k8s-diff-port-648641" podStartSLOduration=1.498814229 podStartE2EDuration="1.498814229s" podCreationTimestamp="2025-11-01 09:21:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 09:21:37.498734816 +0000 UTC m=+1.180141277" watchObservedRunningTime="2025-11-01 09:21:37.498814229 +0000 UTC m=+1.180220695"
	Nov 01 09:21:40 default-k8s-diff-port-648641 kubelet[1324]: I1101 09:21:40.750559    1324 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Nov 01 09:21:40 default-k8s-diff-port-648641 kubelet[1324]: I1101 09:21:40.751326    1324 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Nov 01 09:21:41 default-k8s-diff-port-648641 kubelet[1324]: I1101 09:21:41.733464    1324 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/654df017-7b12-4834-b1af-10bb81208e93-kube-proxy\") pod \"kube-proxy-nwrt4\" (UID: \"654df017-7b12-4834-b1af-10bb81208e93\") " pod="kube-system/kube-proxy-nwrt4"
	Nov 01 09:21:41 default-k8s-diff-port-648641 kubelet[1324]: I1101 09:21:41.734308    1324 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/654df017-7b12-4834-b1af-10bb81208e93-xtables-lock\") pod \"kube-proxy-nwrt4\" (UID: \"654df017-7b12-4834-b1af-10bb81208e93\") " pod="kube-system/kube-proxy-nwrt4"
	Nov 01 09:21:41 default-k8s-diff-port-648641 kubelet[1324]: I1101 09:21:41.734367    1324 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/654df017-7b12-4834-b1af-10bb81208e93-lib-modules\") pod \"kube-proxy-nwrt4\" (UID: \"654df017-7b12-4834-b1af-10bb81208e93\") " pod="kube-system/kube-proxy-nwrt4"
	Nov 01 09:21:41 default-k8s-diff-port-648641 kubelet[1324]: I1101 09:21:41.734401    1324 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4khb6\" (UniqueName: \"kubernetes.io/projected/654df017-7b12-4834-b1af-10bb81208e93-kube-api-access-4khb6\") pod \"kube-proxy-nwrt4\" (UID: \"654df017-7b12-4834-b1af-10bb81208e93\") " pod="kube-system/kube-proxy-nwrt4"
	Nov 01 09:21:41 default-k8s-diff-port-648641 kubelet[1324]: I1101 09:21:41.835062    1324 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/d6592d6f-2eb5-439c-8432-879dfed97262-cni-cfg\") pod \"kindnet-fr9cg\" (UID: \"d6592d6f-2eb5-439c-8432-879dfed97262\") " pod="kube-system/kindnet-fr9cg"
	Nov 01 09:21:41 default-k8s-diff-port-648641 kubelet[1324]: I1101 09:21:41.835113    1324 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d6592d6f-2eb5-439c-8432-879dfed97262-xtables-lock\") pod \"kindnet-fr9cg\" (UID: \"d6592d6f-2eb5-439c-8432-879dfed97262\") " pod="kube-system/kindnet-fr9cg"
	Nov 01 09:21:41 default-k8s-diff-port-648641 kubelet[1324]: I1101 09:21:41.835137    1324 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d6592d6f-2eb5-439c-8432-879dfed97262-lib-modules\") pod \"kindnet-fr9cg\" (UID: \"d6592d6f-2eb5-439c-8432-879dfed97262\") " pod="kube-system/kindnet-fr9cg"
	Nov 01 09:21:41 default-k8s-diff-port-648641 kubelet[1324]: I1101 09:21:41.835159    1324 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r7grj\" (UniqueName: \"kubernetes.io/projected/d6592d6f-2eb5-439c-8432-879dfed97262-kube-api-access-r7grj\") pod \"kindnet-fr9cg\" (UID: \"d6592d6f-2eb5-439c-8432-879dfed97262\") " pod="kube-system/kindnet-fr9cg"
	Nov 01 09:21:42 default-k8s-diff-port-648641 kubelet[1324]: I1101 09:21:42.479474    1324 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-fr9cg" podStartSLOduration=1.4794481560000001 podStartE2EDuration="1.479448156s" podCreationTimestamp="2025-11-01 09:21:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 09:21:42.466062105 +0000 UTC m=+6.147468571" watchObservedRunningTime="2025-11-01 09:21:42.479448156 +0000 UTC m=+6.160854622"
	Nov 01 09:21:46 default-k8s-diff-port-648641 kubelet[1324]: I1101 09:21:46.899140    1324 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-nwrt4" podStartSLOduration=5.899114896 podStartE2EDuration="5.899114896s" podCreationTimestamp="2025-11-01 09:21:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 09:21:42.491147722 +0000 UTC m=+6.172554188" watchObservedRunningTime="2025-11-01 09:21:46.899114896 +0000 UTC m=+10.580521361"
	Nov 01 09:21:53 default-k8s-diff-port-648641 kubelet[1324]: I1101 09:21:53.048642    1324 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Nov 01 09:21:53 default-k8s-diff-port-648641 kubelet[1324]: I1101 09:21:53.123027    1324 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8ff7491d-6812-4d96-a51a-e633029265b2-config-volume\") pod \"coredns-66bc5c9577-nwj2s\" (UID: \"8ff7491d-6812-4d96-a51a-e633029265b2\") " pod="kube-system/coredns-66bc5c9577-nwj2s"
	Nov 01 09:21:53 default-k8s-diff-port-648641 kubelet[1324]: I1101 09:21:53.123104    1324 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/740e55f6-d6f6-423d-8f2b-8b68885e6d6b-tmp\") pod \"storage-provisioner\" (UID: \"740e55f6-d6f6-423d-8f2b-8b68885e6d6b\") " pod="kube-system/storage-provisioner"
	Nov 01 09:21:53 default-k8s-diff-port-648641 kubelet[1324]: I1101 09:21:53.123205    1324 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tzxc4\" (UniqueName: \"kubernetes.io/projected/8ff7491d-6812-4d96-a51a-e633029265b2-kube-api-access-tzxc4\") pod \"coredns-66bc5c9577-nwj2s\" (UID: \"8ff7491d-6812-4d96-a51a-e633029265b2\") " pod="kube-system/coredns-66bc5c9577-nwj2s"
	Nov 01 09:21:53 default-k8s-diff-port-648641 kubelet[1324]: I1101 09:21:53.123242    1324 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lr8tr\" (UniqueName: \"kubernetes.io/projected/740e55f6-d6f6-423d-8f2b-8b68885e6d6b-kube-api-access-lr8tr\") pod \"storage-provisioner\" (UID: \"740e55f6-d6f6-423d-8f2b-8b68885e6d6b\") " pod="kube-system/storage-provisioner"
	Nov 01 09:21:53 default-k8s-diff-port-648641 kubelet[1324]: I1101 09:21:53.491848    1324 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=11.491822638 podStartE2EDuration="11.491822638s" podCreationTimestamp="2025-11-01 09:21:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 09:21:53.491497751 +0000 UTC m=+17.172904216" watchObservedRunningTime="2025-11-01 09:21:53.491822638 +0000 UTC m=+17.173229102"
	Nov 01 09:21:53 default-k8s-diff-port-648641 kubelet[1324]: I1101 09:21:53.503573    1324 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-nwj2s" podStartSLOduration=12.503547816 podStartE2EDuration="12.503547816s" podCreationTimestamp="2025-11-01 09:21:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 09:21:53.503458723 +0000 UTC m=+17.184865189" watchObservedRunningTime="2025-11-01 09:21:53.503547816 +0000 UTC m=+17.184954281"
	Nov 01 09:21:55 default-k8s-diff-port-648641 kubelet[1324]: I1101 09:21:55.639338    1324 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9ktb8\" (UniqueName: \"kubernetes.io/projected/99fe2b46-4570-4a28-91ed-cea90f970719-kube-api-access-9ktb8\") pod \"busybox\" (UID: \"99fe2b46-4570-4a28-91ed-cea90f970719\") " pod="default/busybox"
	Nov 01 09:22:03 default-k8s-diff-port-648641 kubelet[1324]: E1101 09:22:03.660354    1324 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:37388->127.0.0.1:38339: write tcp 127.0.0.1:37388->127.0.0.1:38339: write: broken pipe
	
	
	==> storage-provisioner [11fa4e209ea0da94ece618b3f4489b5442225a299a66c931cf3c9597fb381163] <==
	I1101 09:21:53.435169       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1101 09:21:53.443741       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1101 09:21:53.443802       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1101 09:21:53.446616       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:21:53.451693       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1101 09:21:53.451893       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1101 09:21:53.451987       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"a1a9b3ee-ad3e-47d4-8f38-298304c860b4", APIVersion:"v1", ResourceVersion:"407", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-648641_20eeb302-9c06-4c68-8fa8-4cded9097828 became leader
	I1101 09:21:53.452098       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-648641_20eeb302-9c06-4c68-8fa8-4cded9097828!
	W1101 09:21:53.456484       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:21:53.460405       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1101 09:21:53.552487       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-648641_20eeb302-9c06-4c68-8fa8-4cded9097828!
	W1101 09:21:55.463783       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:21:55.469086       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:21:57.472992       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:21:57.479106       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:21:59.485116       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:21:59.492894       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:22:01.497018       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:22:01.501057       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:22:03.504858       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:22:03.509441       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:22:05.514248       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:22:05.524133       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-648641 -n default-k8s-diff-port-648641
helpers_test.go:269: (dbg) Run:  kubectl --context default-k8s-diff-port-648641 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (2.64s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Pause (7.22s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-340756 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p newest-cni-340756 --alsologtostderr -v=1: exit status 80 (2.527205184s)

                                                
                                                
-- stdout --
	* Pausing node newest-cni-340756 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1101 09:22:12.467466  280039 out.go:360] Setting OutFile to fd 1 ...
	I1101 09:22:12.467616  280039 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:22:12.467628  280039 out.go:374] Setting ErrFile to fd 2...
	I1101 09:22:12.467635  280039 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:22:12.467925  280039 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21835-5913/.minikube/bin
	I1101 09:22:12.468184  280039 out.go:368] Setting JSON to false
	I1101 09:22:12.468230  280039 mustload.go:66] Loading cluster: newest-cni-340756
	I1101 09:22:12.468563  280039 config.go:182] Loaded profile config "newest-cni-340756": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:22:12.469019  280039 cli_runner.go:164] Run: docker container inspect newest-cni-340756 --format={{.State.Status}}
	I1101 09:22:12.488888  280039 host.go:66] Checking if "newest-cni-340756" exists ...
	I1101 09:22:12.489249  280039 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 09:22:12.555489  280039 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:86 OomKillDisable:false NGoroutines:91 SystemTime:2025-11-01 09:22:12.544737952 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1101 09:22:12.556159  280039 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-
cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21800/minikube-v1.37.0-1761658712-21800-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1761658712-21800/minikube-v1.37.0-1761658712-21800-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1761658712-21800-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qe
mu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:newest-cni-340756 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true)
wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1101 09:22:12.558335  280039 out.go:179] * Pausing node newest-cni-340756 ... 
	I1101 09:22:12.559591  280039 host.go:66] Checking if "newest-cni-340756" exists ...
	I1101 09:22:12.559938  280039 ssh_runner.go:195] Run: systemctl --version
	I1101 09:22:12.560000  280039 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-340756
	I1101 09:22:12.580560  280039 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/21835-5913/.minikube/machines/newest-cni-340756/id_rsa Username:docker}
	I1101 09:22:12.683503  280039 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 09:22:12.699121  280039 pause.go:52] kubelet running: true
	I1101 09:22:12.699191  280039 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1101 09:22:12.847891  280039 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1101 09:22:12.848014  280039 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1101 09:22:12.928837  280039 cri.go:89] found id: "bc38b97b18bb036461b1e9e26d9368291053fc9a73c345e3d2f5c589e50b3cf9"
	I1101 09:22:12.928857  280039 cri.go:89] found id: "ecb09261806b38572384c6d5faf910d9ce8eb7cb6141a0ceaa69fc19f0400922"
	I1101 09:22:12.928890  280039 cri.go:89] found id: "aad36a7a488fb62d76728cb3db23aa210d517cbd490ee24cc0c23c7d3785ffaa"
	I1101 09:22:12.928896  280039 cri.go:89] found id: "70c53a24cc729a3d1a8a2c6693f407f6cbfc3bef1804693e6b09d3b79a7a245a"
	I1101 09:22:12.928901  280039 cri.go:89] found id: "373a67149dd379dbe02a8dd2c5dd1346feb196b4cd96a4a446a405b296b37f88"
	I1101 09:22:12.928906  280039 cri.go:89] found id: "5e4575672f000ae294315e642c93643f4cc9fc2335f9213649ae518477bdd2f6"
	I1101 09:22:12.928911  280039 cri.go:89] found id: ""
	I1101 09:22:12.928971  280039 ssh_runner.go:195] Run: sudo runc list -f json
	I1101 09:22:12.942301  280039 retry.go:31] will retry after 359.816046ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T09:22:12Z" level=error msg="open /run/runc: no such file or directory"
	I1101 09:22:13.302917  280039 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 09:22:13.316239  280039 pause.go:52] kubelet running: false
	I1101 09:22:13.316302  280039 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1101 09:22:13.459507  280039 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1101 09:22:13.459603  280039 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1101 09:22:13.531280  280039 cri.go:89] found id: "bc38b97b18bb036461b1e9e26d9368291053fc9a73c345e3d2f5c589e50b3cf9"
	I1101 09:22:13.531307  280039 cri.go:89] found id: "ecb09261806b38572384c6d5faf910d9ce8eb7cb6141a0ceaa69fc19f0400922"
	I1101 09:22:13.531311  280039 cri.go:89] found id: "aad36a7a488fb62d76728cb3db23aa210d517cbd490ee24cc0c23c7d3785ffaa"
	I1101 09:22:13.531314  280039 cri.go:89] found id: "70c53a24cc729a3d1a8a2c6693f407f6cbfc3bef1804693e6b09d3b79a7a245a"
	I1101 09:22:13.531317  280039 cri.go:89] found id: "373a67149dd379dbe02a8dd2c5dd1346feb196b4cd96a4a446a405b296b37f88"
	I1101 09:22:13.531321  280039 cri.go:89] found id: "5e4575672f000ae294315e642c93643f4cc9fc2335f9213649ae518477bdd2f6"
	I1101 09:22:13.531323  280039 cri.go:89] found id: ""
	I1101 09:22:13.531373  280039 ssh_runner.go:195] Run: sudo runc list -f json
	I1101 09:22:13.544419  280039 retry.go:31] will retry after 317.170373ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T09:22:13Z" level=error msg="open /run/runc: no such file or directory"
	I1101 09:22:13.862046  280039 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 09:22:13.875229  280039 pause.go:52] kubelet running: false
	I1101 09:22:13.875286  280039 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1101 09:22:13.986461  280039 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1101 09:22:13.986549  280039 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1101 09:22:14.055584  280039 cri.go:89] found id: "bc38b97b18bb036461b1e9e26d9368291053fc9a73c345e3d2f5c589e50b3cf9"
	I1101 09:22:14.055610  280039 cri.go:89] found id: "ecb09261806b38572384c6d5faf910d9ce8eb7cb6141a0ceaa69fc19f0400922"
	I1101 09:22:14.055617  280039 cri.go:89] found id: "aad36a7a488fb62d76728cb3db23aa210d517cbd490ee24cc0c23c7d3785ffaa"
	I1101 09:22:14.055621  280039 cri.go:89] found id: "70c53a24cc729a3d1a8a2c6693f407f6cbfc3bef1804693e6b09d3b79a7a245a"
	I1101 09:22:14.055625  280039 cri.go:89] found id: "373a67149dd379dbe02a8dd2c5dd1346feb196b4cd96a4a446a405b296b37f88"
	I1101 09:22:14.055630  280039 cri.go:89] found id: "5e4575672f000ae294315e642c93643f4cc9fc2335f9213649ae518477bdd2f6"
	I1101 09:22:14.055634  280039 cri.go:89] found id: ""
	I1101 09:22:14.055689  280039 ssh_runner.go:195] Run: sudo runc list -f json
	I1101 09:22:14.068182  280039 retry.go:31] will retry after 593.119479ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T09:22:14Z" level=error msg="open /run/runc: no such file or directory"
	I1101 09:22:14.661982  280039 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 09:22:14.675137  280039 pause.go:52] kubelet running: false
	I1101 09:22:14.675198  280039 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1101 09:22:14.813684  280039 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1101 09:22:14.813791  280039 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1101 09:22:14.891210  280039 cri.go:89] found id: "bc38b97b18bb036461b1e9e26d9368291053fc9a73c345e3d2f5c589e50b3cf9"
	I1101 09:22:14.891235  280039 cri.go:89] found id: "ecb09261806b38572384c6d5faf910d9ce8eb7cb6141a0ceaa69fc19f0400922"
	I1101 09:22:14.891240  280039 cri.go:89] found id: "aad36a7a488fb62d76728cb3db23aa210d517cbd490ee24cc0c23c7d3785ffaa"
	I1101 09:22:14.891245  280039 cri.go:89] found id: "70c53a24cc729a3d1a8a2c6693f407f6cbfc3bef1804693e6b09d3b79a7a245a"
	I1101 09:22:14.891249  280039 cri.go:89] found id: "373a67149dd379dbe02a8dd2c5dd1346feb196b4cd96a4a446a405b296b37f88"
	I1101 09:22:14.891253  280039 cri.go:89] found id: "5e4575672f000ae294315e642c93643f4cc9fc2335f9213649ae518477bdd2f6"
	I1101 09:22:14.891256  280039 cri.go:89] found id: ""
	I1101 09:22:14.891306  280039 ssh_runner.go:195] Run: sudo runc list -f json
	I1101 09:22:14.912935  280039 out.go:203] 
	W1101 09:22:14.914992  280039 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T09:22:14Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T09:22:14Z" level=error msg="open /run/runc: no such file or directory"
	
	W1101 09:22:14.915015  280039 out.go:285] * 
	* 
	W1101 09:22:14.919043  280039 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1101 09:22:14.920344  280039 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-amd64 pause -p newest-cni-340756 --alsologtostderr -v=1 failed: exit status 80
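The stderr above shows where the pause stalls: minikube's pause path finds the kube-system containers via crictl, then repeatedly runs `sudo runc list -f json` on the node, and every attempt exits with "open /run/runc: no such file or directory", which surfaces as the GUEST_PAUSE error and the final exit status 80. A minimal manual check of the same condition (a sketch only, assuming the newest-cni-340756 node container from this run is still up; these commands are illustrative and not part of the test harness) would be:

	out/minikube-linux-amd64 ssh -p newest-cni-340756 -- sudo ls -d /run/runc
	out/minikube-linux-amd64 ssh -p newest-cni-340756 -- sudo runc list -f json

If /run/runc is absent on the node, the second command reproduces the same status-1 failure that drives the retry loop logged above.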
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect newest-cni-340756
helpers_test.go:243: (dbg) docker inspect newest-cni-340756:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "9977e921720f0f30775d6c9164d67ea2db04ff6c50519da751c92db9fc5fc85b",
	        "Created": "2025-11-01T09:21:23.482376732Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 273823,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-01T09:22:00.001656371Z",
	            "FinishedAt": "2025-11-01T09:21:58.895172046Z"
	        },
	        "Image": "sha256:a1caeebaf98ed0136731e905a1e086f77985a42c2ebb5a7e0b3d0bd7fcbe10cc",
	        "ResolvConfPath": "/var/lib/docker/containers/9977e921720f0f30775d6c9164d67ea2db04ff6c50519da751c92db9fc5fc85b/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/9977e921720f0f30775d6c9164d67ea2db04ff6c50519da751c92db9fc5fc85b/hostname",
	        "HostsPath": "/var/lib/docker/containers/9977e921720f0f30775d6c9164d67ea2db04ff6c50519da751c92db9fc5fc85b/hosts",
	        "LogPath": "/var/lib/docker/containers/9977e921720f0f30775d6c9164d67ea2db04ff6c50519da751c92db9fc5fc85b/9977e921720f0f30775d6c9164d67ea2db04ff6c50519da751c92db9fc5fc85b-json.log",
	        "Name": "/newest-cni-340756",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-340756:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "newest-cni-340756",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "9977e921720f0f30775d6c9164d67ea2db04ff6c50519da751c92db9fc5fc85b",
	                "LowerDir": "/var/lib/docker/overlay2/79a5b3fa0361a2a9c5d3edbeca3366aecf897b34708fba6c670fef7311204878-init/diff:/var/lib/docker/overlay2/95b4043403bc6c2b0722fad0bb31817195e1e004282cc78b772e7c8c9f9def2d/diff",
	                "MergedDir": "/var/lib/docker/overlay2/79a5b3fa0361a2a9c5d3edbeca3366aecf897b34708fba6c670fef7311204878/merged",
	                "UpperDir": "/var/lib/docker/overlay2/79a5b3fa0361a2a9c5d3edbeca3366aecf897b34708fba6c670fef7311204878/diff",
	                "WorkDir": "/var/lib/docker/overlay2/79a5b3fa0361a2a9c5d3edbeca3366aecf897b34708fba6c670fef7311204878/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "newest-cni-340756",
	                "Source": "/var/lib/docker/volumes/newest-cni-340756/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-340756",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-340756",
	                "name.minikube.sigs.k8s.io": "newest-cni-340756",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "561fb00f1940aa01fcb0a4250de8744e1523a36f72770fbfc5a55c9c7786e3a3",
	            "SandboxKey": "/var/run/docker/netns/561fb00f1940",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33093"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33094"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33097"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33095"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33096"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-340756": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.94.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "da:a0:bd:11:5f:58",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "6d98c8d1b523eaf92b0807c4ccbd2e833f29938a64f5a83fb094948eae42b694",
	                    "EndpointID": "9c129a9335972736767b14a5193462aee8d2af40aeb547815cf73930f371c7df",
	                    "Gateway": "192.168.94.1",
	                    "IPAddress": "192.168.94.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-340756",
	                        "9977e921720f"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
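Note on the inspect output above: the kic container's service ports are published only on 127.0.0.1 of the host (22->33093, 2376->33094, 5000->33095, 8443->33096, 32443->33097), and the container sits at 192.168.94.2 on the newest-cni-340756 network. A single mapping can be read back with the same Go template the harness itself uses later in this log, for example (the quoting here is illustrative, not the harness's literal invocation):

    docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' newest-cni-340756
    # prints 33093 for the container shown above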
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-340756 -n newest-cni-340756
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-340756 -n newest-cni-340756: exit status 2 (353.749486ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/newest-cni/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-340756 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p newest-cni-340756 logs -n 25: (1.148354963s)
helpers_test.go:260: TestStartStop/group/newest-cni/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ image   │ no-preload-397460 image list --format=json                                                                                                                                                                                                    │ no-preload-397460            │ jenkins │ v1.37.0 │ 01 Nov 25 09:21 UTC │ 01 Nov 25 09:21 UTC │
	│ pause   │ -p no-preload-397460 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-397460            │ jenkins │ v1.37.0 │ 01 Nov 25 09:21 UTC │                     │
	│ delete  │ -p old-k8s-version-152344                                                                                                                                                                                                                     │ old-k8s-version-152344       │ jenkins │ v1.37.0 │ 01 Nov 25 09:21 UTC │ 01 Nov 25 09:21 UTC │
	│ delete  │ -p old-k8s-version-152344                                                                                                                                                                                                                     │ old-k8s-version-152344       │ jenkins │ v1.37.0 │ 01 Nov 25 09:21 UTC │ 01 Nov 25 09:21 UTC │
	│ delete  │ -p no-preload-397460                                                                                                                                                                                                                          │ no-preload-397460            │ jenkins │ v1.37.0 │ 01 Nov 25 09:21 UTC │ 01 Nov 25 09:21 UTC │
	│ delete  │ -p disable-driver-mounts-366530                                                                                                                                                                                                               │ disable-driver-mounts-366530 │ jenkins │ v1.37.0 │ 01 Nov 25 09:21 UTC │ 01 Nov 25 09:21 UTC │
	│ start   │ -p default-k8s-diff-port-648641 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-648641 │ jenkins │ v1.37.0 │ 01 Nov 25 09:21 UTC │ 01 Nov 25 09:21 UTC │
	│ delete  │ -p no-preload-397460                                                                                                                                                                                                                          │ no-preload-397460            │ jenkins │ v1.37.0 │ 01 Nov 25 09:21 UTC │ 01 Nov 25 09:21 UTC │
	│ start   │ -p newest-cni-340756 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-340756            │ jenkins │ v1.37.0 │ 01 Nov 25 09:21 UTC │ 01 Nov 25 09:21 UTC │
	│ addons  │ enable metrics-server -p newest-cni-340756 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-340756            │ jenkins │ v1.37.0 │ 01 Nov 25 09:21 UTC │                     │
	│ stop    │ -p newest-cni-340756 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-340756            │ jenkins │ v1.37.0 │ 01 Nov 25 09:21 UTC │ 01 Nov 25 09:21 UTC │
	│ image   │ embed-certs-236314 image list --format=json                                                                                                                                                                                                   │ embed-certs-236314           │ jenkins │ v1.37.0 │ 01 Nov 25 09:21 UTC │ 01 Nov 25 09:21 UTC │
	│ pause   │ -p embed-certs-236314 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-236314           │ jenkins │ v1.37.0 │ 01 Nov 25 09:21 UTC │                     │
	│ addons  │ enable dashboard -p newest-cni-340756 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-340756            │ jenkins │ v1.37.0 │ 01 Nov 25 09:21 UTC │ 01 Nov 25 09:21 UTC │
	│ start   │ -p newest-cni-340756 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-340756            │ jenkins │ v1.37.0 │ 01 Nov 25 09:21 UTC │ 01 Nov 25 09:22 UTC │
	│ delete  │ -p embed-certs-236314                                                                                                                                                                                                                         │ embed-certs-236314           │ jenkins │ v1.37.0 │ 01 Nov 25 09:22 UTC │ 01 Nov 25 09:22 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-648641 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-648641 │ jenkins │ v1.37.0 │ 01 Nov 25 09:22 UTC │                     │
	│ start   │ -p kubernetes-upgrade-846924 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio                                                                                                                             │ kubernetes-upgrade-846924    │ jenkins │ v1.37.0 │ 01 Nov 25 09:22 UTC │                     │
	│ start   │ -p kubernetes-upgrade-846924 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                      │ kubernetes-upgrade-846924    │ jenkins │ v1.37.0 │ 01 Nov 25 09:22 UTC │ 01 Nov 25 09:22 UTC │
	│ delete  │ -p embed-certs-236314                                                                                                                                                                                                                         │ embed-certs-236314           │ jenkins │ v1.37.0 │ 01 Nov 25 09:22 UTC │ 01 Nov 25 09:22 UTC │
	│ start   │ -p auto-204434 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio                                                                                                                       │ auto-204434                  │ jenkins │ v1.37.0 │ 01 Nov 25 09:22 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-648641 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-648641 │ jenkins │ v1.37.0 │ 01 Nov 25 09:22 UTC │                     │
	│ image   │ newest-cni-340756 image list --format=json                                                                                                                                                                                                    │ newest-cni-340756            │ jenkins │ v1.37.0 │ 01 Nov 25 09:22 UTC │ 01 Nov 25 09:22 UTC │
	│ pause   │ -p newest-cni-340756 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-340756            │ jenkins │ v1.37.0 │ 01 Nov 25 09:22 UTC │                     │
	│ delete  │ -p kubernetes-upgrade-846924                                                                                                                                                                                                                  │ kubernetes-upgrade-846924    │ jenkins │ v1.37.0 │ 01 Nov 25 09:22 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/01 09:22:05
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1101 09:22:05.711340  276407 out.go:360] Setting OutFile to fd 1 ...
	I1101 09:22:05.711670  276407 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:22:05.711684  276407 out.go:374] Setting ErrFile to fd 2...
	I1101 09:22:05.711691  276407 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:22:05.711998  276407 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21835-5913/.minikube/bin
	I1101 09:22:05.712532  276407 out.go:368] Setting JSON to false
	I1101 09:22:05.713762  276407 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":3874,"bootTime":1761985052,"procs":293,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1101 09:22:05.713858  276407 start.go:143] virtualization: kvm guest
	I1101 09:22:05.715800  276407 out.go:179] * [auto-204434] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1101 09:22:05.717701  276407 notify.go:221] Checking for updates...
	I1101 09:22:05.717725  276407 out.go:179]   - MINIKUBE_LOCATION=21835
	I1101 09:22:05.719593  276407 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1101 09:22:05.720893  276407 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21835-5913/kubeconfig
	I1101 09:22:05.722275  276407 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21835-5913/.minikube
	I1101 09:22:05.723443  276407 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1101 09:22:05.724494  276407 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1101 09:22:05.727185  276407 config.go:182] Loaded profile config "default-k8s-diff-port-648641": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:22:05.727323  276407 config.go:182] Loaded profile config "kubernetes-upgrade-846924": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:22:05.727498  276407 config.go:182] Loaded profile config "newest-cni-340756": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:22:05.727627  276407 driver.go:422] Setting default libvirt URI to qemu:///system
	I1101 09:22:05.758694  276407 docker.go:124] docker version: linux-28.5.1:Docker Engine - Community
	I1101 09:22:05.758798  276407 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 09:22:05.846249  276407 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:65 OomKillDisable:false NGoroutines:77 SystemTime:2025-11-01 09:22:05.829664288 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1101 09:22:05.846418  276407 docker.go:319] overlay module found
	I1101 09:22:05.847981  276407 out.go:179] * Using the docker driver based on user configuration
	I1101 09:22:05.849131  276407 start.go:309] selected driver: docker
	I1101 09:22:05.849146  276407 start.go:930] validating driver "docker" against <nil>
	I1101 09:22:05.849159  276407 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1101 09:22:05.849755  276407 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 09:22:05.933002  276407 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:65 OomKillDisable:false NGoroutines:77 SystemTime:2025-11-01 09:22:05.918086581 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1101 09:22:05.933194  276407 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1101 09:22:05.933449  276407 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1101 09:22:05.935998  276407 out.go:179] * Using Docker driver with root privileges
	I1101 09:22:05.937238  276407 cni.go:84] Creating CNI manager for ""
	I1101 09:22:05.937322  276407 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1101 09:22:05.937337  276407 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1101 09:22:05.937445  276407 start.go:353] cluster config:
	{Name:auto-204434 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:auto-204434 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:cri
o CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: Au
toPauseInterval:1m0s}
	I1101 09:22:05.938599  276407 out.go:179] * Starting "auto-204434" primary control-plane node in "auto-204434" cluster
	I1101 09:22:05.939994  276407 cache.go:124] Beginning downloading kic base image for docker with crio
	I1101 09:22:05.941821  276407 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1101 09:22:05.943201  276407 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1101 09:22:05.943255  276407 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21835-5913/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1101 09:22:05.943283  276407 cache.go:59] Caching tarball of preloaded images
	I1101 09:22:05.943341  276407 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1101 09:22:05.943376  276407 preload.go:233] Found /home/jenkins/minikube-integration/21835-5913/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1101 09:22:05.943387  276407 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1101 09:22:05.943523  276407 profile.go:143] Saving config to /home/jenkins/minikube-integration/21835-5913/.minikube/profiles/auto-204434/config.json ...
	I1101 09:22:05.943548  276407 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21835-5913/.minikube/profiles/auto-204434/config.json: {Name:mk5f54a1765a9c24120e43a140907f34571a38cb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:22:05.974112  276407 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1101 09:22:05.974139  276407 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1101 09:22:05.974162  276407 cache.go:233] Successfully downloaded all kic artifacts
	I1101 09:22:05.974193  276407 start.go:360] acquireMachinesLock for auto-204434: {Name:mkcf97ab3aa07489c0f785f70ef0c30e8d690267 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1101 09:22:05.974316  276407 start.go:364] duration metric: took 99.396µs to acquireMachinesLock for "auto-204434"
	I1101 09:22:05.974353  276407 start.go:93] Provisioning new machine with config: &{Name:auto-204434 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:auto-204434 Namespace:default APIServerHAVIP: APIServerName:minikub
eCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: Soc
ketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1101 09:22:05.974440  276407 start.go:125] createHost starting for "" (driver="docker")
	I1101 09:22:04.806905  273527 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1101 09:22:04.806938  273527 machine.go:97] duration metric: took 4.462381762s to provisionDockerMachine
	I1101 09:22:04.806957  273527 start.go:293] postStartSetup for "newest-cni-340756" (driver="docker")
	I1101 09:22:04.806970  273527 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1101 09:22:04.807047  273527 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1101 09:22:04.807122  273527 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-340756
	I1101 09:22:04.830006  273527 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/21835-5913/.minikube/machines/newest-cni-340756/id_rsa Username:docker}
	I1101 09:22:04.940674  273527 ssh_runner.go:195] Run: cat /etc/os-release
	I1101 09:22:04.945561  273527 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1101 09:22:04.945604  273527 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1101 09:22:04.945617  273527 filesync.go:126] Scanning /home/jenkins/minikube-integration/21835-5913/.minikube/addons for local assets ...
	I1101 09:22:04.945686  273527 filesync.go:126] Scanning /home/jenkins/minikube-integration/21835-5913/.minikube/files for local assets ...
	I1101 09:22:04.945818  273527 filesync.go:149] local asset: /home/jenkins/minikube-integration/21835-5913/.minikube/files/etc/ssl/certs/94142.pem -> 94142.pem in /etc/ssl/certs
	I1101 09:22:04.945959  273527 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1101 09:22:04.957349  273527 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-5913/.minikube/files/etc/ssl/certs/94142.pem --> /etc/ssl/certs/94142.pem (1708 bytes)
	I1101 09:22:04.981372  273527 start.go:296] duration metric: took 174.396915ms for postStartSetup
	I1101 09:22:04.981481  273527 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1101 09:22:04.981542  273527 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-340756
	I1101 09:22:05.005381  273527 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/21835-5913/.minikube/machines/newest-cni-340756/id_rsa Username:docker}
	I1101 09:22:05.115054  273527 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1101 09:22:05.120762  273527 fix.go:56] duration metric: took 5.186211788s for fixHost
	I1101 09:22:05.120799  273527 start.go:83] releasing machines lock for "newest-cni-340756", held for 5.18627811s
	I1101 09:22:05.120901  273527 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-340756
	I1101 09:22:05.145509  273527 ssh_runner.go:195] Run: cat /version.json
	I1101 09:22:05.145545  273527 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1101 09:22:05.145570  273527 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-340756
	I1101 09:22:05.145608  273527 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-340756
	I1101 09:22:05.169584  273527 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/21835-5913/.minikube/machines/newest-cni-340756/id_rsa Username:docker}
	I1101 09:22:05.172179  273527 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/21835-5913/.minikube/machines/newest-cni-340756/id_rsa Username:docker}
	I1101 09:22:05.354981  273527 ssh_runner.go:195] Run: systemctl --version
	I1101 09:22:05.363833  273527 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1101 09:22:05.415632  273527 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1101 09:22:05.422017  273527 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1101 09:22:05.422102  273527 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1101 09:22:05.436004  273527 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1101 09:22:05.436029  273527 start.go:496] detecting cgroup driver to use...
	I1101 09:22:05.436071  273527 detect.go:190] detected "systemd" cgroup driver on host os
	I1101 09:22:05.436133  273527 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1101 09:22:05.460442  273527 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1101 09:22:05.478694  273527 docker.go:218] disabling cri-docker service (if available) ...
	I1101 09:22:05.478755  273527 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1101 09:22:05.499017  273527 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1101 09:22:05.515241  273527 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1101 09:22:05.631626  273527 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1101 09:22:05.743645  273527 docker.go:234] disabling docker service ...
	I1101 09:22:05.743722  273527 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1101 09:22:05.765717  273527 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1101 09:22:05.781219  273527 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1101 09:22:05.904600  273527 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1101 09:22:06.011255  273527 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1101 09:22:06.026419  273527 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1101 09:22:06.044799  273527 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1101 09:22:06.044859  273527 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:22:06.055826  273527 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1101 09:22:06.055905  273527 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:22:06.067326  273527 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:22:06.079138  273527 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:22:06.090185  273527 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1101 09:22:06.100520  273527 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:22:06.111691  273527 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:22:06.123545  273527 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:22:06.140005  273527 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1101 09:22:06.152026  273527 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1101 09:22:06.164001  273527 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 09:22:06.270216  273527 ssh_runner.go:195] Run: sudo systemctl restart crio
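Aside on the CRI-O reconfiguration above: the sed edits set pause_image = "registry.k8s.io/pause:3.10.1", cgroup_manager = "systemd", conmon_cgroup = "pod" and add the net.ipv4.ip_unprivileged_port_start=0 sysctl in /etc/crio/crio.conf.d/02-crio.conf before the daemon-reload and restart. A quick sanity check on the node (the grep pattern is only an illustration) would be:

    sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf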
	I1101 09:22:06.417294  273527 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1101 09:22:06.417382  273527 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1101 09:22:06.423222  273527 start.go:564] Will wait 60s for crictl version
	I1101 09:22:06.423300  273527 ssh_runner.go:195] Run: which crictl
	I1101 09:22:06.427749  273527 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1101 09:22:06.458152  273527 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1101 09:22:06.458212  273527 ssh_runner.go:195] Run: crio --version
	I1101 09:22:06.503267  273527 ssh_runner.go:195] Run: crio --version
	I1101 09:22:06.550281  273527 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1101 09:22:06.551816  273527 cli_runner.go:164] Run: docker network inspect newest-cni-340756 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1101 09:22:06.576910  273527 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I1101 09:22:06.582012  273527 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1101 09:22:06.608950  273527 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1101 09:22:06.610040  273527 kubeadm.go:884] updating cluster {Name:newest-cni-340756 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-340756 Namespace:default APIServerHAVIP: APIServerName:minikubeCA API
ServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:
262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1101 09:22:06.610163  273527 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1101 09:22:06.610227  273527 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 09:22:06.683653  273527 crio.go:514] all images are preloaded for cri-o runtime.
	I1101 09:22:06.683693  273527 crio.go:433] Images already preloaded, skipping extraction
	I1101 09:22:06.683760  273527 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 09:22:06.723121  273527 crio.go:514] all images are preloaded for cri-o runtime.
	I1101 09:22:06.723144  273527 cache_images.go:86] Images are preloaded, skipping loading
	I1101 09:22:06.723151  273527 kubeadm.go:935] updating node { 192.168.94.2 8443 v1.34.1 crio true true} ...
	I1101 09:22:06.723246  273527 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-340756 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:newest-cni-340756 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
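Aside on the kubelet unit text above: it is installed as a systemd drop-in (the transfer to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf appears a little further down in this log), so the effective unit, base service plus drop-in, can be reviewed on the node with, for example:

    sudo systemctl cat kubelet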
	I1101 09:22:06.723306  273527 ssh_runner.go:195] Run: crio config
	I1101 09:22:06.786111  273527 cni.go:84] Creating CNI manager for ""
	I1101 09:22:06.786135  273527 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1101 09:22:06.786156  273527 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1101 09:22:06.786177  273527 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-340756 NodeName:newest-cni-340756 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/k
ubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1101 09:22:06.786300  273527 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-340756"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.94.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1101 09:22:06.786369  273527 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1101 09:22:06.796608  273527 binaries.go:44] Found k8s binaries, skipping transfer
	I1101 09:22:06.796684  273527 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1101 09:22:06.806326  273527 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1101 09:22:06.820628  273527 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1101 09:22:06.837121  273527 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2211 bytes)
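Aside on the kubeadm config generated above: it bundles InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration in one document and, per the scp line just above, is staged at /var/tmp/minikube/kubeadm.yaml.new (2211 bytes). minikube drives kubeadm itself, but a config of this shape would ordinarily be consumed along the lines of (illustrative invocation, not taken from this log):

    sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml.new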
	I1101 09:22:06.857636  273527 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I1101 09:22:06.862150  273527 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1101 09:22:06.874015  273527 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 09:22:06.979096  273527 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1101 09:22:07.007290  273527 certs.go:69] Setting up /home/jenkins/minikube-integration/21835-5913/.minikube/profiles/newest-cni-340756 for IP: 192.168.94.2
	I1101 09:22:07.007317  273527 certs.go:195] generating shared ca certs ...
	I1101 09:22:07.007341  273527 certs.go:227] acquiring lock for ca certs: {Name:mkfdee6a84670347521013ebeef165551380cb9c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:22:07.007520  273527 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21835-5913/.minikube/ca.key
	I1101 09:22:07.007574  273527 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21835-5913/.minikube/proxy-client-ca.key
	I1101 09:22:07.007589  273527 certs.go:257] generating profile certs ...
	I1101 09:22:07.007695  273527 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21835-5913/.minikube/profiles/newest-cni-340756/client.key
	I1101 09:22:07.007773  273527 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21835-5913/.minikube/profiles/newest-cni-340756/apiserver.key.b81bb48a
	I1101 09:22:07.007825  273527 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21835-5913/.minikube/profiles/newest-cni-340756/proxy-client.key
	I1101 09:22:07.008017  273527 certs.go:484] found cert: /home/jenkins/minikube-integration/21835-5913/.minikube/certs/9414.pem (1338 bytes)
	W1101 09:22:07.008065  273527 certs.go:480] ignoring /home/jenkins/minikube-integration/21835-5913/.minikube/certs/9414_empty.pem, impossibly tiny 0 bytes
	I1101 09:22:07.008081  273527 certs.go:484] found cert: /home/jenkins/minikube-integration/21835-5913/.minikube/certs/ca-key.pem (1675 bytes)
	I1101 09:22:07.008117  273527 certs.go:484] found cert: /home/jenkins/minikube-integration/21835-5913/.minikube/certs/ca.pem (1078 bytes)
	I1101 09:22:07.008159  273527 certs.go:484] found cert: /home/jenkins/minikube-integration/21835-5913/.minikube/certs/cert.pem (1123 bytes)
	I1101 09:22:07.008188  273527 certs.go:484] found cert: /home/jenkins/minikube-integration/21835-5913/.minikube/certs/key.pem (1675 bytes)
	I1101 09:22:07.008241  273527 certs.go:484] found cert: /home/jenkins/minikube-integration/21835-5913/.minikube/files/etc/ssl/certs/94142.pem (1708 bytes)
	I1101 09:22:07.008838  273527 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-5913/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1101 09:22:07.031200  273527 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-5913/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1101 09:22:07.056448  273527 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-5913/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1101 09:22:07.081129  273527 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-5913/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1101 09:22:07.110647  273527 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-5913/.minikube/profiles/newest-cni-340756/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1101 09:22:07.134599  273527 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-5913/.minikube/profiles/newest-cni-340756/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1101 09:22:07.154986  273527 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-5913/.minikube/profiles/newest-cni-340756/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1101 09:22:07.174892  273527 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-5913/.minikube/profiles/newest-cni-340756/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1101 09:22:07.196294  273527 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-5913/.minikube/files/etc/ssl/certs/94142.pem --> /usr/share/ca-certificates/94142.pem (1708 bytes)
	I1101 09:22:07.221008  273527 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-5913/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1101 09:22:07.244489  273527 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-5913/.minikube/certs/9414.pem --> /usr/share/ca-certificates/9414.pem (1338 bytes)
	I1101 09:22:07.265192  273527 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1101 09:22:07.280295  273527 ssh_runner.go:195] Run: openssl version
	I1101 09:22:07.287645  273527 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/94142.pem && ln -fs /usr/share/ca-certificates/94142.pem /etc/ssl/certs/94142.pem"
	I1101 09:22:07.298090  273527 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/94142.pem
	I1101 09:22:07.302758  273527 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  1 08:35 /usr/share/ca-certificates/94142.pem
	I1101 09:22:07.302825  273527 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/94142.pem
	I1101 09:22:07.340567  273527 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/94142.pem /etc/ssl/certs/3ec20f2e.0"
	I1101 09:22:07.351497  273527 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1101 09:22:07.361526  273527 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1101 09:22:07.365913  273527 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  1 08:29 /usr/share/ca-certificates/minikubeCA.pem
	I1101 09:22:07.365998  273527 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1101 09:22:07.412071  273527 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1101 09:22:07.421925  273527 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9414.pem && ln -fs /usr/share/ca-certificates/9414.pem /etc/ssl/certs/9414.pem"
	I1101 09:22:07.431580  273527 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9414.pem
	I1101 09:22:07.435832  273527 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  1 08:35 /usr/share/ca-certificates/9414.pem
	I1101 09:22:07.435916  273527 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9414.pem
	I1101 09:22:07.473513  273527 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/9414.pem /etc/ssl/certs/51391683.0"
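Aside on the certificate setup above: the openssl x509 -hash / ln -fs pairs implement OpenSSL's hashed-directory lookup, in which each CA certificate in /etc/ssl/certs is reachable through a symlink named after its subject-name hash plus a .0 suffix; that is where the names 3ec20f2e.0, b5213941.0 and 51391683.0 come from. Recreating one such link by hand would look roughly like this (paths as used above):

    sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem).0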
	I1101 09:22:07.482637  273527 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1101 09:22:07.487075  273527 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1101 09:22:07.546182  273527 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1101 09:22:07.605844  273527 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1101 09:22:07.647398  273527 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1101 09:22:07.699202  273527 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1101 09:22:07.753456  273527 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
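
Each of the -checkend 86400 runs above asks OpenSSL whether the certificate expires within the next 24 hours (86400 seconds); a non-zero exit flags a certificate that is about to lapse. The same check can be expressed directly with Go's crypto/x509, shown here as an illustrative sketch; the path is one of the certificates from the log.

// Illustrative sketch: report whether a PEM certificate expires within the
// given window, equivalent in spirit to `openssl x509 -checkend 86400`.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func expiresWithin(path string, window time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(window).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	if soon {
		os.Exit(1) // same convention as -checkend: non-zero when expiring
	}
}
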
	I1101 09:22:07.801614  273527 kubeadm.go:401] StartCluster: {Name:newest-cni-340756 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-340756 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 09:22:07.801737  273527 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1101 09:22:07.801801  273527 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1101 09:22:07.837412  273527 cri.go:89] found id: ""
	I1101 09:22:07.837482  273527 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1101 09:22:07.847014  273527 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1101 09:22:07.847037  273527 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1101 09:22:07.847102  273527 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1101 09:22:07.858069  273527 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1101 09:22:07.859275  273527 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-340756" does not appear in /home/jenkins/minikube-integration/21835-5913/kubeconfig
	I1101 09:22:07.859859  273527 kubeconfig.go:62] /home/jenkins/minikube-integration/21835-5913/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-340756" cluster setting kubeconfig missing "newest-cni-340756" context setting]
	I1101 09:22:07.860858  273527 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21835-5913/kubeconfig: {Name:mk9d3795c1875d6d9ba81c81d9d8436a6b7942d2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
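
At this point the profile's cluster and context entries are missing from the shared kubeconfig, so the file is repaired in place before the restart continues. A rough sketch of such a repair using client-go's clientcmd package follows; the file path, cluster name, and server URL are taken from the log, while the rest is an assumption rather than minikube's actual implementation.

// Illustrative sketch: make sure a named cluster and context exist in a
// kubeconfig file, adding minimal entries when they are missing.
package main

import (
	"k8s.io/client-go/tools/clientcmd"
	api "k8s.io/client-go/tools/clientcmd/api"
)

func ensureCluster(kubeconfig, name, server string) error {
	cfg, err := clientcmd.LoadFromFile(kubeconfig)
	if err != nil {
		cfg = api.NewConfig() // start fresh if the file is missing or unreadable
	}
	if cfg.Clusters == nil {
		cfg.Clusters = map[string]*api.Cluster{}
	}
	if cfg.Contexts == nil {
		cfg.Contexts = map[string]*api.Context{}
	}
	if _, ok := cfg.Clusters[name]; !ok {
		cfg.Clusters[name] = &api.Cluster{Server: server}
	}
	if _, ok := cfg.Contexts[name]; !ok {
		cfg.Contexts[name] = &api.Context{Cluster: name, AuthInfo: name}
	}
	cfg.CurrentContext = name
	return clientcmd.WriteToFile(*cfg, kubeconfig)
}

func main() {
	_ = ensureCluster("/home/jenkins/minikube-integration/21835-5913/kubeconfig",
		"newest-cni-340756", "https://192.168.94.2:8443")
}
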
	I1101 09:22:07.975505  273527 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1101 09:22:07.990645  273527 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.94.2
	I1101 09:22:07.990688  273527 kubeadm.go:602] duration metric: took 143.644663ms to restartPrimaryControlPlane
	I1101 09:22:07.990701  273527 kubeadm.go:403] duration metric: took 189.101498ms to StartCluster
	I1101 09:22:07.990723  273527 settings.go:142] acquiring lock: {Name:mkb1ba7d0d4bb15f3f0746ce486d72703f901580 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:22:07.990790  273527 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21835-5913/kubeconfig
	I1101 09:22:07.992386  273527 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21835-5913/kubeconfig: {Name:mk9d3795c1875d6d9ba81c81d9d8436a6b7942d2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:22:07.998489  273527 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1101 09:22:07.998722  273527 config.go:182] Loaded profile config "newest-cni-340756": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:22:07.998795  273527 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1101 09:22:07.998903  273527 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-340756"
	I1101 09:22:07.998932  273527 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-340756"
	W1101 09:22:07.998940  273527 addons.go:248] addon storage-provisioner should already be in state true
	I1101 09:22:07.998966  273527 host.go:66] Checking if "newest-cni-340756" exists ...
	I1101 09:22:07.999515  273527 cli_runner.go:164] Run: docker container inspect newest-cni-340756 --format={{.State.Status}}
	I1101 09:22:07.999612  273527 addons.go:70] Setting dashboard=true in profile "newest-cni-340756"
	I1101 09:22:07.999641  273527 addons.go:239] Setting addon dashboard=true in "newest-cni-340756"
	W1101 09:22:07.999650  273527 addons.go:248] addon dashboard should already be in state true
	I1101 09:22:07.999682  273527 host.go:66] Checking if "newest-cni-340756" exists ...
	I1101 09:22:07.999737  273527 addons.go:70] Setting default-storageclass=true in profile "newest-cni-340756"
	I1101 09:22:07.999764  273527 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-340756"
	I1101 09:22:08.000063  273527 cli_runner.go:164] Run: docker container inspect newest-cni-340756 --format={{.State.Status}}
	I1101 09:22:08.000953  273527 cli_runner.go:164] Run: docker container inspect newest-cni-340756 --format={{.State.Status}}
	I1101 09:22:08.029554  273527 out.go:179] * Verifying Kubernetes components...
	I1101 09:22:08.036361  273527 addons.go:239] Setting addon default-storageclass=true in "newest-cni-340756"
	W1101 09:22:08.036389  273527 addons.go:248] addon default-storageclass should already be in state true
	I1101 09:22:08.036416  273527 host.go:66] Checking if "newest-cni-340756" exists ...
	I1101 09:22:08.036991  273527 cli_runner.go:164] Run: docker container inspect newest-cni-340756 --format={{.State.Status}}
	I1101 09:22:08.059837  273527 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1101 09:22:08.059874  273527 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1101 09:22:08.059952  273527 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-340756
	I1101 09:22:08.082574  273527 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/21835-5913/.minikube/machines/newest-cni-340756/id_rsa Username:docker}
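
The container-inspect template above resolves which host port Docker mapped to the guest's 22/tcp, which is why the SSH client dials 127.0.0.1:33093 instead of the container IP. A small Go sketch of the same lookup (container name taken from the log; shelling out to the docker CLI rather than using an SDK):

// Illustrative sketch: ask Docker which host port is published for the
// container's 22/tcp, the lookup used before opening the SSH session.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func sshHostPort(container string) (string, error) {
	out, err := exec.Command("docker", "container", "inspect", "-f",
		`{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`, container).Output()
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	port, err := sshHostPort("newest-cni-340756")
	if err != nil {
		fmt.Println("inspect failed:", err)
		return
	}
	fmt.Println("ssh -p", port, "docker@127.0.0.1")
}
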
	I1101 09:22:08.160110  273527 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1101 09:22:08.160188  273527 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 09:22:08.160122  273527 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1101 09:22:08.199601  273527 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1101 09:22:08.199627  273527 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1101 09:22:08.199657  273527 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1101 09:22:08.199698  273527 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-340756
	I1101 09:22:08.201451  273527 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1101 09:22:08.201530  273527 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1101 09:22:08.201600  273527 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-340756
	I1101 09:22:08.218657  273527 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1101 09:22:08.229178  273527 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/21835-5913/.minikube/machines/newest-cni-340756/id_rsa Username:docker}
	I1101 09:22:08.236783  273527 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/21835-5913/.minikube/machines/newest-cni-340756/id_rsa Username:docker}
	I1101 09:22:08.290570  273527 ssh_runner.go:195] Run: sudo systemctl start kubelet
	W1101 09:22:08.322269  273527 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 09:22:08.322338  273527 retry.go:31] will retry after 218.781993ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 09:22:08.327574  273527 api_server.go:52] waiting for apiserver process to appear ...
	I1101 09:22:08.327656  273527 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 09:22:08.363326  273527 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1101 09:22:08.363357  273527 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1101 09:22:08.384314  273527 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1101 09:22:08.384342  273527 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1101 09:22:08.395381  273527 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1101 09:22:08.415512  273527 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1101 09:22:08.415538  273527 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1101 09:22:08.440755  273527 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1101 09:22:08.440782  273527 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1101 09:22:08.481486  273527 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1101 09:22:08.481523  273527 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1101 09:22:08.514579  273527 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1101 09:22:08.514602  273527 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	W1101 09:22:08.517571  273527 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 09:22:08.517669  273527 retry.go:31] will retry after 127.472154ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
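
Both storageclass.yaml and storage-provisioner.yaml fail to apply while the apiserver is still coming up, so each apply is retried after a short delay (the log shows sub-second, jittered waits). A compact Go sketch of that retry loop follows; the kubectl and manifest paths are the ones in the log, and the delay schedule here is illustrative only.

// Illustrative sketch: re-run `kubectl apply` with a growing delay until the
// apiserver stops refusing connections or the attempts run out.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"time"
)

func applyWithRetry(kubectl, manifest string, attempts int, delay time.Duration) error {
	var lastErr error
	for i := 0; i < attempts; i++ {
		cmd := exec.Command(kubectl, "apply", "-f", manifest)
		cmd.Env = append(os.Environ(), "KUBECONFIG=/var/lib/minikube/kubeconfig")
		if out, err := cmd.CombinedOutput(); err == nil {
			return nil
		} else {
			lastErr = fmt.Errorf("attempt %d: %v: %s", i+1, err, out)
		}
		time.Sleep(delay)
		delay *= 2 // simple backoff for illustration; the real intervals vary
	}
	return lastErr
}

func main() {
	err := applyWithRetry("/var/lib/minikube/binaries/v1.34.1/kubectl",
		"/etc/kubernetes/addons/storage-provisioner.yaml", 5, 200*time.Millisecond)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
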
	I1101 09:22:08.536332  273527 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1101 09:22:08.536357  273527 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1101 09:22:08.541813  273527 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1101 09:22:08.555217  273527 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1101 09:22:08.555302  273527 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1101 09:22:08.574831  273527 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1101 09:22:08.575045  273527 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1101 09:22:08.599509  273527 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1101 09:22:08.646806  273527 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1101 09:22:08.828442  273527 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 09:22:05.647079  276088 out.go:252] * Updating the running docker "kubernetes-upgrade-846924" container ...
	I1101 09:22:05.647128  276088 machine.go:94] provisionDockerMachine start ...
	I1101 09:22:05.647215  276088 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-846924
	I1101 09:22:05.674134  276088 main.go:143] libmachine: Using SSH client type: native
	I1101 09:22:05.674482  276088 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33033 <nil> <nil>}
	I1101 09:22:05.674499  276088 main.go:143] libmachine: About to run SSH command:
	hostname
	I1101 09:22:05.837994  276088 main.go:143] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-846924
	
	I1101 09:22:05.838030  276088 ubuntu.go:182] provisioning hostname "kubernetes-upgrade-846924"
	I1101 09:22:05.838089  276088 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-846924
	I1101 09:22:05.863288  276088 main.go:143] libmachine: Using SSH client type: native
	I1101 09:22:05.863502  276088 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33033 <nil> <nil>}
	I1101 09:22:05.863515  276088 main.go:143] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-846924 && echo "kubernetes-upgrade-846924" | sudo tee /etc/hostname
	I1101 09:22:06.033260  276088 main.go:143] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-846924
	
	I1101 09:22:06.033352  276088 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-846924
	I1101 09:22:06.057362  276088 main.go:143] libmachine: Using SSH client type: native
	I1101 09:22:06.057657  276088 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33033 <nil> <nil>}
	I1101 09:22:06.057689  276088 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-846924' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-846924/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-846924' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1101 09:22:06.216887  276088 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1101 09:22:06.216941  276088 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21835-5913/.minikube CaCertPath:/home/jenkins/minikube-integration/21835-5913/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21835-5913/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21835-5913/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21835-5913/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21835-5913/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21835-5913/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21835-5913/.minikube}
	I1101 09:22:06.216973  276088 ubuntu.go:190] setting up certificates
	I1101 09:22:06.216987  276088 provision.go:84] configureAuth start
	I1101 09:22:06.217052  276088 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-846924
	I1101 09:22:06.245626  276088 provision.go:143] copyHostCerts
	I1101 09:22:06.245695  276088 exec_runner.go:144] found /home/jenkins/minikube-integration/21835-5913/.minikube/ca.pem, removing ...
	I1101 09:22:06.245713  276088 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21835-5913/.minikube/ca.pem
	I1101 09:22:06.245791  276088 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21835-5913/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21835-5913/.minikube/ca.pem (1078 bytes)
	I1101 09:22:06.245941  276088 exec_runner.go:144] found /home/jenkins/minikube-integration/21835-5913/.minikube/cert.pem, removing ...
	I1101 09:22:06.245955  276088 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21835-5913/.minikube/cert.pem
	I1101 09:22:06.246010  276088 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21835-5913/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21835-5913/.minikube/cert.pem (1123 bytes)
	I1101 09:22:06.246117  276088 exec_runner.go:144] found /home/jenkins/minikube-integration/21835-5913/.minikube/key.pem, removing ...
	I1101 09:22:06.246132  276088 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21835-5913/.minikube/key.pem
	I1101 09:22:06.246183  276088 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21835-5913/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21835-5913/.minikube/key.pem (1675 bytes)
	I1101 09:22:06.246287  276088 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21835-5913/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21835-5913/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21835-5913/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-846924 san=[127.0.0.1 192.168.85.2 kubernetes-upgrade-846924 localhost minikube]
	I1101 09:22:06.317361  276088 provision.go:177] copyRemoteCerts
	I1101 09:22:06.317427  276088 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1101 09:22:06.317483  276088 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-846924
	I1101 09:22:06.343148  276088 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33033 SSHKeyPath:/home/jenkins/minikube-integration/21835-5913/.minikube/machines/kubernetes-upgrade-846924/id_rsa Username:docker}
	I1101 09:22:06.456260  276088 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-5913/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1101 09:22:06.485037  276088 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-5913/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1101 09:22:06.517756  276088 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-5913/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1101 09:22:06.544234  276088 provision.go:87] duration metric: took 327.229113ms to configureAuth
	I1101 09:22:06.544271  276088 ubuntu.go:206] setting minikube options for container-runtime
	I1101 09:22:06.544471  276088 config.go:182] Loaded profile config "kubernetes-upgrade-846924": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:22:06.544617  276088 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-846924
	I1101 09:22:06.569285  276088 main.go:143] libmachine: Using SSH client type: native
	I1101 09:22:06.569591  276088 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33033 <nil> <nil>}
	I1101 09:22:06.569624  276088 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1101 09:22:07.204568  276088 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1101 09:22:07.204604  276088 machine.go:97] duration metric: took 1.557463214s to provisionDockerMachine
	I1101 09:22:07.204621  276088 start.go:293] postStartSetup for "kubernetes-upgrade-846924" (driver="docker")
	I1101 09:22:07.204636  276088 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1101 09:22:07.204753  276088 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1101 09:22:07.204818  276088 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-846924
	I1101 09:22:07.228728  276088 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33033 SSHKeyPath:/home/jenkins/minikube-integration/21835-5913/.minikube/machines/kubernetes-upgrade-846924/id_rsa Username:docker}
	I1101 09:22:07.334185  276088 ssh_runner.go:195] Run: cat /etc/os-release
	I1101 09:22:07.338285  276088 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1101 09:22:07.338321  276088 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1101 09:22:07.338335  276088 filesync.go:126] Scanning /home/jenkins/minikube-integration/21835-5913/.minikube/addons for local assets ...
	I1101 09:22:07.338391  276088 filesync.go:126] Scanning /home/jenkins/minikube-integration/21835-5913/.minikube/files for local assets ...
	I1101 09:22:07.338532  276088 filesync.go:149] local asset: /home/jenkins/minikube-integration/21835-5913/.minikube/files/etc/ssl/certs/94142.pem -> 94142.pem in /etc/ssl/certs
	I1101 09:22:07.338685  276088 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1101 09:22:07.348802  276088 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-5913/.minikube/files/etc/ssl/certs/94142.pem --> /etc/ssl/certs/94142.pem (1708 bytes)
	I1101 09:22:07.369195  276088 start.go:296] duration metric: took 164.558801ms for postStartSetup
	I1101 09:22:07.369285  276088 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1101 09:22:07.369328  276088 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-846924
	I1101 09:22:07.392303  276088 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33033 SSHKeyPath:/home/jenkins/minikube-integration/21835-5913/.minikube/machines/kubernetes-upgrade-846924/id_rsa Username:docker}
	I1101 09:22:07.496144  276088 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1101 09:22:07.505141  276088 fix.go:56] duration metric: took 1.884787218s for fixHost
	I1101 09:22:07.505175  276088 start.go:83] releasing machines lock for "kubernetes-upgrade-846924", held for 1.88484515s
	I1101 09:22:07.505267  276088 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-846924
	I1101 09:22:07.539620  276088 ssh_runner.go:195] Run: cat /version.json
	I1101 09:22:07.539692  276088 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-846924
	I1101 09:22:07.539941  276088 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1101 09:22:07.540028  276088 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-846924
	I1101 09:22:07.563799  276088 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33033 SSHKeyPath:/home/jenkins/minikube-integration/21835-5913/.minikube/machines/kubernetes-upgrade-846924/id_rsa Username:docker}
	I1101 09:22:07.564572  276088 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33033 SSHKeyPath:/home/jenkins/minikube-integration/21835-5913/.minikube/machines/kubernetes-upgrade-846924/id_rsa Username:docker}
	I1101 09:22:07.741477  276088 ssh_runner.go:195] Run: systemctl --version
	I1101 09:22:07.749692  276088 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1101 09:22:07.852381  276088 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1101 09:22:07.859149  276088 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1101 09:22:07.859221  276088 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1101 09:22:07.870525  276088 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1101 09:22:07.870571  276088 start.go:496] detecting cgroup driver to use...
	I1101 09:22:07.870610  276088 detect.go:190] detected "systemd" cgroup driver on host os
	I1101 09:22:07.870689  276088 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1101 09:22:07.892776  276088 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1101 09:22:07.909177  276088 docker.go:218] disabling cri-docker service (if available) ...
	I1101 09:22:07.909247  276088 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1101 09:22:07.927490  276088 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1101 09:22:07.946398  276088 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1101 09:22:08.089522  276088 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1101 09:22:08.225706  276088 docker.go:234] disabling docker service ...
	I1101 09:22:08.225780  276088 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1101 09:22:08.250209  276088 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1101 09:22:08.270613  276088 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1101 09:22:08.418308  276088 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1101 09:22:08.594788  276088 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1101 09:22:08.619481  276088 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1101 09:22:08.644670  276088 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1101 09:22:08.644727  276088 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:22:08.659321  276088 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1101 09:22:08.659498  276088 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:22:08.673806  276088 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:22:08.687889  276088 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:22:08.701421  276088 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1101 09:22:08.714384  276088 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:22:08.731972  276088 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:22:08.748600  276088 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:22:08.763323  276088 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1101 09:22:08.774843  276088 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1101 09:22:08.785781  276088 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 09:22:08.925357  276088 ssh_runner.go:195] Run: sudo systemctl restart crio
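
The sed commands above rewrite /etc/crio/crio.conf.d/02-crio.conf so CRI-O uses the expected pause image and the systemd cgroup manager before the service is restarted. An equivalent in-place rewrite sketched in Go follows; it is illustrative only, so run it against a copy of the file when experimenting.

// Illustrative sketch: rewrite the pause_image and cgroup_manager lines of a
// CRI-O drop-in config, mirroring the sed edits in the log.
package main

import (
	"fmt"
	"os"
	"regexp"
)

func rewrite(path, pauseImage, cgroupManager string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	data = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAll(data, []byte(fmt.Sprintf("pause_image = %q", pauseImage)))
	data = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAll(data, []byte(fmt.Sprintf("cgroup_manager = %q", cgroupManager)))
	return os.WriteFile(path, data, 0o644)
}

func main() {
	if err := rewrite("/etc/crio/crio.conf.d/02-crio.conf",
		"registry.k8s.io/pause:3.10.1", "systemd"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
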
	I1101 09:22:05.976519  276407 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1101 09:22:05.976793  276407 start.go:159] libmachine.API.Create for "auto-204434" (driver="docker")
	I1101 09:22:05.976831  276407 client.go:173] LocalClient.Create starting
	I1101 09:22:05.976935  276407 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21835-5913/.minikube/certs/ca.pem
	I1101 09:22:05.976985  276407 main.go:143] libmachine: Decoding PEM data...
	I1101 09:22:05.977010  276407 main.go:143] libmachine: Parsing certificate...
	I1101 09:22:05.977101  276407 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21835-5913/.minikube/certs/cert.pem
	I1101 09:22:05.977141  276407 main.go:143] libmachine: Decoding PEM data...
	I1101 09:22:05.977159  276407 main.go:143] libmachine: Parsing certificate...
	I1101 09:22:05.977526  276407 cli_runner.go:164] Run: docker network inspect auto-204434 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1101 09:22:05.999616  276407 cli_runner.go:211] docker network inspect auto-204434 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1101 09:22:05.999690  276407 network_create.go:284] running [docker network inspect auto-204434] to gather additional debugging logs...
	I1101 09:22:05.999716  276407 cli_runner.go:164] Run: docker network inspect auto-204434
	W1101 09:22:06.020232  276407 cli_runner.go:211] docker network inspect auto-204434 returned with exit code 1
	I1101 09:22:06.020274  276407 network_create.go:287] error running [docker network inspect auto-204434]: docker network inspect auto-204434: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network auto-204434 not found
	I1101 09:22:06.020298  276407 network_create.go:289] output of [docker network inspect auto-204434]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network auto-204434 not found
	
	** /stderr **
	I1101 09:22:06.020436  276407 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1101 09:22:06.042618  276407 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-5f44df6b5a5b IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:aa:38:92:20:b3:ae} reservation:<nil>}
	I1101 09:22:06.043430  276407 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-ec772021a1d5 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:e6:14:7e:99:b1:e5} reservation:<nil>}
	I1101 09:22:06.044395  276407 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-6ef14c0d2e1a IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:ca:5b:36:d5:85:2b} reservation:<nil>}
	I1101 09:22:06.045516  276407 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001eb28a0}
	I1101 09:22:06.045539  276407 network_create.go:124] attempt to create docker network auto-204434 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1101 09:22:06.045576  276407 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=auto-204434 auto-204434
	I1101 09:22:06.116244  276407 network_create.go:108] docker network auto-204434 192.168.76.0/24 created
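
The subnet selection above walks candidate private /24 ranges in steps of nine (192.168.49.0/24, .58, .67, ...) and takes the first one not already occupied by an existing Docker bridge, which is how auto-204434 ends up on 192.168.76.0/24. A toy Go sketch of that scan; in practice the taken set would come from docker network inspect rather than a hard-coded map.

// Illustrative sketch: pick the first free 192.168.x.0/24 from the candidate
// sequence the log steps through.
package main

import "fmt"

func firstFreeSubnet(taken map[string]bool) string {
	for third := 49; third < 255; third += 9 {
		cidr := fmt.Sprintf("192.168.%d.0/24", third)
		if !taken[cidr] {
			return cidr
		}
	}
	return ""
}

func main() {
	taken := map[string]bool{ // subnets the log reports as already in use
		"192.168.49.0/24": true,
		"192.168.58.0/24": true,
		"192.168.67.0/24": true,
	}
	fmt.Println(firstFreeSubnet(taken)) // prints 192.168.76.0/24, matching the log
}
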
	I1101 09:22:06.116278  276407 kic.go:121] calculated static IP "192.168.76.2" for the "auto-204434" container
	I1101 09:22:06.116348  276407 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1101 09:22:06.143457  276407 cli_runner.go:164] Run: docker volume create auto-204434 --label name.minikube.sigs.k8s.io=auto-204434 --label created_by.minikube.sigs.k8s.io=true
	I1101 09:22:06.170650  276407 oci.go:103] Successfully created a docker volume auto-204434
	I1101 09:22:06.170756  276407 cli_runner.go:164] Run: docker run --rm --name auto-204434-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=auto-204434 --entrypoint /usr/bin/test -v auto-204434:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -d /var/lib
	I1101 09:22:06.663169  276407 oci.go:107] Successfully prepared a docker volume auto-204434
	I1101 09:22:06.663228  276407 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1101 09:22:06.663255  276407 kic.go:194] Starting extracting preloaded images to volume ...
	I1101 09:22:06.663326  276407 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21835-5913/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v auto-204434:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir
	I1101 09:22:10.485138  276407 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21835-5913/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v auto-204434:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir: (3.821719226s)
	I1101 09:22:10.485173  276407 kic.go:203] duration metric: took 3.821915018s to extract preloaded images to volume ...
	W1101 09:22:10.485264  276407 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1101 09:22:10.485306  276407 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1101 09:22:10.485358  276407 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1101 09:22:10.575992  276407 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname auto-204434 --name auto-204434 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=auto-204434 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=auto-204434 --network auto-204434 --ip 192.168.76.2 --volume auto-204434:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8
	I1101 09:22:10.641344  276088 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1.715941494s)
	I1101 09:22:10.641379  276088 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1101 09:22:10.641430  276088 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1101 09:22:10.646814  276088 start.go:564] Will wait 60s for crictl version
	I1101 09:22:10.646919  276088 ssh_runner.go:195] Run: which crictl
	I1101 09:22:10.651419  276088 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1101 09:22:10.681572  276088 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1101 09:22:10.681682  276088 ssh_runner.go:195] Run: crio --version
	I1101 09:22:10.719255  276088 ssh_runner.go:195] Run: crio --version
	I1101 09:22:10.767399  276088 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1101 09:22:10.416760  273527 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: (1.874909085s)
	I1101 09:22:11.009824  273527 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (2.410270794s)
	I1101 09:22:11.011277  273527 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-340756 addons enable metrics-server
	
	I1101 09:22:11.101539  273527 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.454688543s)
	I1101 09:22:11.101606  273527 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (2.273124581s)
	I1101 09:22:11.101638  273527 api_server.go:72] duration metric: took 3.103091722s to wait for apiserver process to appear ...
	I1101 09:22:11.101649  273527 api_server.go:88] waiting for apiserver healthz status ...
	I1101 09:22:11.101670  273527 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1101 09:22:11.103555  273527 out.go:179] * Enabled addons: default-storageclass, dashboard, storage-provisioner
	I1101 09:22:11.104893  273527 addons.go:515] duration metric: took 3.106071791s for enable addons: enabled=[default-storageclass dashboard storage-provisioner]
	I1101 09:22:11.106539  273527 api_server.go:279] https://192.168.94.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1101 09:22:11.106572  273527 api_server.go:103] status: https://192.168.94.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1101 09:22:11.601966  273527 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1101 09:22:11.607311  273527 api_server.go:279] https://192.168.94.2:8443/healthz returned 200:
	ok
	I1101 09:22:11.608417  273527 api_server.go:141] control plane version: v1.34.1
	I1101 09:22:11.608446  273527 api_server.go:131] duration metric: took 506.791046ms to wait for apiserver health ...
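
The healthz wait above first sees a 500 while the rbac/bootstrap-roles and scheduling/bootstrap-system-priority-classes post-start hooks are still running, then gets a 200 on the next poll roughly half a second later. A bare-bones Go sketch of such a poll loop; the endpoint is the one from the log, and InsecureSkipVerify stands in for the cluster CA that would normally be trusted.

// Illustrative sketch: poll the apiserver /healthz endpoint until it answers
// 200 OK or the overall deadline expires.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func waitHealthy(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   2 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond) // the log polls at roughly this interval
	}
	return fmt.Errorf("apiserver not healthy after %s", timeout)
}

func main() {
	if err := waitHealthy("https://192.168.94.2:8443/healthz", time.Minute); err != nil {
		fmt.Println(err)
	}
}
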
	I1101 09:22:11.608454  273527 system_pods.go:43] waiting for kube-system pods to appear ...
	I1101 09:22:11.612066  273527 system_pods.go:59] 8 kube-system pods found
	I1101 09:22:11.612117  273527 system_pods.go:61] "coredns-66bc5c9577-tmnp2" [3dc7a625-aa33-404e-b8e1-4abff976bac9] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1101 09:22:11.612131  273527 system_pods.go:61] "etcd-newest-cni-340756" [5ba122dc-81df-44c9-b993-82d2381dd60c] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1101 09:22:11.612146  273527 system_pods.go:61] "kindnet-gjnst" [9c4e4a33-eff1-47ec-94bc-7f9196c547ff] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1101 09:22:11.612167  273527 system_pods.go:61] "kube-apiserver-newest-cni-340756" [fefc943a-a3b3-4069-9eed-d6a6815d3846] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1101 09:22:11.612179  273527 system_pods.go:61] "kube-controller-manager-newest-cni-340756" [f6823fe4-7c7e-4b04-8fbd-f52058100d5b] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1101 09:22:11.612191  273527 system_pods.go:61] "kube-proxy-wp2h9" [e6a908ac-4dfb-4f1c-8059-79695562a817] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1101 09:22:11.612201  273527 system_pods.go:61] "kube-scheduler-newest-cni-340756" [4673d267-6290-4f99-af1c-173b383aa4ea] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1101 09:22:11.612211  273527 system_pods.go:61] "storage-provisioner" [0e7d7956-489a-4005-ba49-4975f35bfc8a] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1101 09:22:11.612222  273527 system_pods.go:74] duration metric: took 3.761459ms to wait for pod list to return data ...
	I1101 09:22:11.612237  273527 default_sa.go:34] waiting for default service account to be created ...
	I1101 09:22:11.615307  273527 default_sa.go:45] found service account: "default"
	I1101 09:22:11.615337  273527 default_sa.go:55] duration metric: took 3.089636ms for default service account to be created ...
	I1101 09:22:11.615352  273527 kubeadm.go:587] duration metric: took 3.616804608s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1101 09:22:11.615373  273527 node_conditions.go:102] verifying NodePressure condition ...
	I1101 09:22:11.618241  273527 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1101 09:22:11.618268  273527 node_conditions.go:123] node cpu capacity is 8
	I1101 09:22:11.618284  273527 node_conditions.go:105] duration metric: took 2.903073ms to run NodePressure ...
	I1101 09:22:11.618299  273527 start.go:242] waiting for startup goroutines ...
	I1101 09:22:11.618308  273527 start.go:247] waiting for cluster config update ...
	I1101 09:22:11.618323  273527 start.go:256] writing updated cluster config ...
	I1101 09:22:11.618654  273527 ssh_runner.go:195] Run: rm -f paused
	I1101 09:22:11.681927  273527 start.go:628] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1101 09:22:11.683708  273527 out.go:179] * Done! kubectl is now configured to use "newest-cni-340756" cluster and "default" namespace by default
	I1101 09:22:10.768596  276088 cli_runner.go:164] Run: docker network inspect kubernetes-upgrade-846924 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1101 09:22:10.792198  276088 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1101 09:22:10.797446  276088 kubeadm.go:884] updating cluster {Name:kubernetes-upgrade-846924 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:kubernetes-upgrade-846924 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1101 09:22:10.797612  276088 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1101 09:22:10.797664  276088 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 09:22:10.840546  276088 crio.go:514] all images are preloaded for cri-o runtime.
	I1101 09:22:10.840572  276088 crio.go:433] Images already preloaded, skipping extraction
	I1101 09:22:10.840629  276088 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 09:22:10.877778  276088 crio.go:514] all images are preloaded for cri-o runtime.
	I1101 09:22:10.877805  276088 cache_images.go:86] Images are preloaded, skipping loading
	I1101 09:22:10.877814  276088 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1101 09:22:10.877951  276088 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=kubernetes-upgrade-846924 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:kubernetes-upgrade-846924 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1101 09:22:10.878021  276088 ssh_runner.go:195] Run: crio config
	I1101 09:22:10.944328  276088 cni.go:84] Creating CNI manager for ""
	I1101 09:22:10.944357  276088 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1101 09:22:10.944379  276088 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1101 09:22:10.944407  276088 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-846924 NodeName:kubernetes-upgrade-846924 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt Sta
ticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1101 09:22:10.944798  276088 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "kubernetes-upgrade-846924"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1101 09:22:10.944974  276088 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1101 09:22:10.956775  276088 binaries.go:44] Found k8s binaries, skipping transfer
	I1101 09:22:10.956914  276088 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1101 09:22:10.968441  276088 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (375 bytes)
	I1101 09:22:10.987739  276088 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1101 09:22:11.006195  276088 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2221 bytes)
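The multi-document kubeadm.yaml generated above (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) is what the scp line just shown copies to /var/tmp/minikube/kubeadm.yaml.new on the node. Below is a minimal standalone Go sketch, not taken from minikube itself, of how such a multi-document file could be split and sanity-checked; the local file name and the systemd cgroup-driver check are assumptions for illustration only.

	package main

	import (
		"bytes"
		"fmt"
		"io"
		"os"

		"gopkg.in/yaml.v3"
	)

	func main() {
		// Path is an assumption for the example; the run above writes the file
		// to /var/tmp/minikube/kubeadm.yaml.new on the node.
		data, err := os.ReadFile("kubeadm.yaml")
		if err != nil {
			panic(err)
		}
		dec := yaml.NewDecoder(bytes.NewReader(data))
		kinds := map[string]map[string]interface{}{}
		for {
			doc := map[string]interface{}{}
			if err := dec.Decode(&doc); err != nil {
				if err == io.EOF {
					break // end of the multi-document stream
				}
				panic(err)
			}
			if kind, ok := doc["kind"].(string); ok {
				kinds[kind] = doc
			}
		}
		for _, want := range []string{"InitConfiguration", "ClusterConfiguration", "KubeletConfiguration", "KubeProxyConfiguration"} {
			if _, ok := kinds[want]; !ok {
				fmt.Println("missing document:", want)
			}
		}
		// The CRI-O nodes in this report run with the systemd cgroup driver.
		if kc, ok := kinds["KubeletConfiguration"]; ok && kc["cgroupDriver"] != "systemd" {
			fmt.Println("unexpected cgroupDriver:", kc["cgroupDriver"])
		}
	}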
	I1101 09:22:11.024327  276088 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1101 09:22:11.029387  276088 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 09:22:11.162953  276088 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1101 09:22:11.181695  276088 certs.go:69] Setting up /home/jenkins/minikube-integration/21835-5913/.minikube/profiles/kubernetes-upgrade-846924 for IP: 192.168.85.2
	I1101 09:22:11.181723  276088 certs.go:195] generating shared ca certs ...
	I1101 09:22:11.181751  276088 certs.go:227] acquiring lock for ca certs: {Name:mkfdee6a84670347521013ebeef165551380cb9c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:22:11.181943  276088 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21835-5913/.minikube/ca.key
	I1101 09:22:11.182004  276088 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21835-5913/.minikube/proxy-client-ca.key
	I1101 09:22:11.182018  276088 certs.go:257] generating profile certs ...
	I1101 09:22:11.182147  276088 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21835-5913/.minikube/profiles/kubernetes-upgrade-846924/client.key
	I1101 09:22:11.182225  276088 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21835-5913/.minikube/profiles/kubernetes-upgrade-846924/apiserver.key.916adac9
	I1101 09:22:11.182275  276088 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21835-5913/.minikube/profiles/kubernetes-upgrade-846924/proxy-client.key
	I1101 09:22:11.182432  276088 certs.go:484] found cert: /home/jenkins/minikube-integration/21835-5913/.minikube/certs/9414.pem (1338 bytes)
	W1101 09:22:11.182475  276088 certs.go:480] ignoring /home/jenkins/minikube-integration/21835-5913/.minikube/certs/9414_empty.pem, impossibly tiny 0 bytes
	I1101 09:22:11.182490  276088 certs.go:484] found cert: /home/jenkins/minikube-integration/21835-5913/.minikube/certs/ca-key.pem (1675 bytes)
	I1101 09:22:11.182534  276088 certs.go:484] found cert: /home/jenkins/minikube-integration/21835-5913/.minikube/certs/ca.pem (1078 bytes)
	I1101 09:22:11.182570  276088 certs.go:484] found cert: /home/jenkins/minikube-integration/21835-5913/.minikube/certs/cert.pem (1123 bytes)
	I1101 09:22:11.182606  276088 certs.go:484] found cert: /home/jenkins/minikube-integration/21835-5913/.minikube/certs/key.pem (1675 bytes)
	I1101 09:22:11.182667  276088 certs.go:484] found cert: /home/jenkins/minikube-integration/21835-5913/.minikube/files/etc/ssl/certs/94142.pem (1708 bytes)
	I1101 09:22:11.183451  276088 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-5913/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1101 09:22:11.209884  276088 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-5913/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1101 09:22:11.236298  276088 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-5913/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1101 09:22:11.257992  276088 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-5913/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1101 09:22:11.282307  276088 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-5913/.minikube/profiles/kubernetes-upgrade-846924/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I1101 09:22:11.311550  276088 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-5913/.minikube/profiles/kubernetes-upgrade-846924/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1101 09:22:11.340940  276088 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-5913/.minikube/profiles/kubernetes-upgrade-846924/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1101 09:22:11.366187  276088 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-5913/.minikube/profiles/kubernetes-upgrade-846924/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1101 09:22:11.393007  276088 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-5913/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1101 09:22:11.422259  276088 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-5913/.minikube/certs/9414.pem --> /usr/share/ca-certificates/9414.pem (1338 bytes)
	I1101 09:22:11.448507  276088 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-5913/.minikube/files/etc/ssl/certs/94142.pem --> /usr/share/ca-certificates/94142.pem (1708 bytes)
	I1101 09:22:11.474106  276088 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1101 09:22:11.492294  276088 ssh_runner.go:195] Run: openssl version
	I1101 09:22:11.501509  276088 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1101 09:22:11.513227  276088 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1101 09:22:11.519133  276088 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  1 08:29 /usr/share/ca-certificates/minikubeCA.pem
	I1101 09:22:11.519193  276088 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1101 09:22:11.563691  276088 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1101 09:22:11.574504  276088 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9414.pem && ln -fs /usr/share/ca-certificates/9414.pem /etc/ssl/certs/9414.pem"
	I1101 09:22:11.586411  276088 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9414.pem
	I1101 09:22:11.590799  276088 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  1 08:35 /usr/share/ca-certificates/9414.pem
	I1101 09:22:11.590853  276088 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9414.pem
	I1101 09:22:11.638725  276088 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/9414.pem /etc/ssl/certs/51391683.0"
	I1101 09:22:11.650344  276088 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/94142.pem && ln -fs /usr/share/ca-certificates/94142.pem /etc/ssl/certs/94142.pem"
	I1101 09:22:11.664171  276088 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/94142.pem
	I1101 09:22:11.669035  276088 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  1 08:35 /usr/share/ca-certificates/94142.pem
	I1101 09:22:11.669106  276088 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/94142.pem
	I1101 09:22:11.724650  276088 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/94142.pem /etc/ssl/certs/3ec20f2e.0"
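The three openssl/ln pairs above follow the standard OpenSSL rehash convention: the certificate's subject hash (openssl x509 -hash) becomes the name of a <hash>.0 symlink under /etc/ssl/certs, which is how minikubeCA.pem, 9414.pem and 94142.pem end up as b5213941.0, 51391683.0 and 3ec20f2e.0 in this run. A minimal Go sketch of that pairing, shelling out to the same openssl invocation the log shows (a standalone illustration, not minikube's actual implementation):

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"path/filepath"
		"strings"
	)

	// installCert links a PEM certificate into /etc/ssl/certs under its
	// OpenSSL subject hash, mirroring the "openssl x509 -hash" + "ln -fs"
	// pair in the log above.
	func installCert(pemPath string) error {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
		if err != nil {
			return fmt.Errorf("hashing %s: %w", pemPath, err)
		}
		link := filepath.Join("/etc/ssl/certs", strings.TrimSpace(string(out))+".0")
		_ = os.Remove(link) // ln -fs semantics: replace any existing link
		return os.Symlink(pemPath, link)
	}

	func main() {
		if err := installCert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
	}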
	I1101 09:22:11.737588  276088 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1101 09:22:11.743583  276088 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1101 09:22:11.797439  276088 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1101 09:22:11.845361  276088 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1101 09:22:11.887355  276088 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1101 09:22:11.926103  276088 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1101 09:22:11.972673  276088 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1101 09:22:12.023233  276088 kubeadm.go:401] StartCluster: {Name:kubernetes-upgrade-846924 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:kubernetes-upgrade-846924 Namespace:default APIServerHAVIP: APIServerName:m
inikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMne
tClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 09:22:12.023340  276088 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1101 09:22:12.023415  276088 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1101 09:22:12.062891  276088 cri.go:89] found id: "44126f4bf42e994de66bd876e7d445f9d3bd0d3c2a86f7edca0bb3811e4c5489"
	I1101 09:22:12.062916  276088 cri.go:89] found id: "643ea6e6d6ed926cd507fd2a48c01d0dfd41a1484066f5e8a7b04580f945b04a"
	I1101 09:22:12.062922  276088 cri.go:89] found id: "d158ab595b1c26904edb7f9e5cdd15d93c5ddee7c40c01d13344be4f4d4e0307"
	I1101 09:22:12.062931  276088 cri.go:89] found id: "36a09b0f45253b87f14ef94674e368b7ab5bcac11c0136c26fdffe4a49ee83f1"
	I1101 09:22:12.062935  276088 cri.go:89] found id: ""
	I1101 09:22:12.062992  276088 ssh_runner.go:195] Run: sudo runc list -f json
	W1101 09:22:12.075848  276088 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T09:22:12Z" level=error msg="open /run/runc: no such file or directory"
	I1101 09:22:12.075973  276088 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1101 09:22:12.085105  276088 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1101 09:22:12.085134  276088 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1101 09:22:12.085197  276088 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1101 09:22:12.096233  276088 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1101 09:22:12.097281  276088 kubeconfig.go:125] found "kubernetes-upgrade-846924" server: "https://192.168.85.2:8443"
	I1101 09:22:12.099596  276088 kapi.go:59] client config for kubernetes-upgrade-846924: &rest.Config{Host:"https://192.168.85.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21835-5913/.minikube/profiles/kubernetes-upgrade-846924/client.crt", KeyFile:"/home/jenkins/minikube-integration/21835-5913/.minikube/profiles/kubernetes-upgrade-846924/client.key", CAFile:"/home/jenkins/minikube-integration/21835-5913/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CA
Data:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x281c680), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1101 09:22:12.100357  276088 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1101 09:22:12.100580  276088 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1101 09:22:12.100611  276088 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1101 09:22:12.100645  276088 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1101 09:22:12.100663  276088 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1101 09:22:12.101282  276088 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1101 09:22:12.112715  276088 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1101 09:22:12.112803  276088 kubeadm.go:602] duration metric: took 27.662048ms to restartPrimaryControlPlane
	I1101 09:22:12.112821  276088 kubeadm.go:403] duration metric: took 89.601052ms to StartCluster
	I1101 09:22:12.112844  276088 settings.go:142] acquiring lock: {Name:mkb1ba7d0d4bb15f3f0746ce486d72703f901580 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:22:12.112940  276088 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21835-5913/kubeconfig
	I1101 09:22:12.114162  276088 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21835-5913/kubeconfig: {Name:mk9d3795c1875d6d9ba81c81d9d8436a6b7942d2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:22:12.114576  276088 config.go:182] Loaded profile config "kubernetes-upgrade-846924": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:22:12.114626  276088 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1101 09:22:12.114701  276088 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1101 09:22:12.114825  276088 addons.go:70] Setting storage-provisioner=true in profile "kubernetes-upgrade-846924"
	I1101 09:22:12.114847  276088 addons.go:239] Setting addon storage-provisioner=true in "kubernetes-upgrade-846924"
	W1101 09:22:12.114895  276088 addons.go:248] addon storage-provisioner should already be in state true
	I1101 09:22:12.114934  276088 host.go:66] Checking if "kubernetes-upgrade-846924" exists ...
	I1101 09:22:12.114847  276088 addons.go:70] Setting default-storageclass=true in profile "kubernetes-upgrade-846924"
	I1101 09:22:12.115003  276088 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "kubernetes-upgrade-846924"
	I1101 09:22:12.115295  276088 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-846924 --format={{.State.Status}}
	I1101 09:22:12.115425  276088 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-846924 --format={{.State.Status}}
	I1101 09:22:12.118288  276088 out.go:179] * Verifying Kubernetes components...
	I1101 09:22:12.120018  276088 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 09:22:12.144980  276088 kapi.go:59] client config for kubernetes-upgrade-846924: &rest.Config{Host:"https://192.168.85.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21835-5913/.minikube/profiles/kubernetes-upgrade-846924/client.crt", KeyFile:"/home/jenkins/minikube-integration/21835-5913/.minikube/profiles/kubernetes-upgrade-846924/client.key", CAFile:"/home/jenkins/minikube-integration/21835-5913/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CA
Data:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x281c680), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1101 09:22:12.145348  276088 addons.go:239] Setting addon default-storageclass=true in "kubernetes-upgrade-846924"
	W1101 09:22:12.145372  276088 addons.go:248] addon default-storageclass should already be in state true
	I1101 09:22:12.145405  276088 host.go:66] Checking if "kubernetes-upgrade-846924" exists ...
	I1101 09:22:12.145946  276088 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-846924 --format={{.State.Status}}
	I1101 09:22:12.146458  276088 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1101 09:22:12.148092  276088 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1101 09:22:12.148114  276088 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1101 09:22:12.148174  276088 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-846924
	I1101 09:22:12.181269  276088 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1101 09:22:12.181294  276088 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1101 09:22:12.181360  276088 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-846924
	I1101 09:22:12.182388  276088 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33033 SSHKeyPath:/home/jenkins/minikube-integration/21835-5913/.minikube/machines/kubernetes-upgrade-846924/id_rsa Username:docker}
	I1101 09:22:12.209998  276088 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33033 SSHKeyPath:/home/jenkins/minikube-integration/21835-5913/.minikube/machines/kubernetes-upgrade-846924/id_rsa Username:docker}
	I1101 09:22:12.265109  276088 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1101 09:22:12.280376  276088 api_server.go:52] waiting for apiserver process to appear ...
	I1101 09:22:12.280471  276088 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 09:22:12.294009  276088 api_server.go:72] duration metric: took 179.350741ms to wait for apiserver process to appear ...
	I1101 09:22:12.294045  276088 api_server.go:88] waiting for apiserver healthz status ...
	I1101 09:22:12.294073  276088 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1101 09:22:12.300115  276088 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1101 09:22:12.306038  276088 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1101 09:22:12.307085  276088 api_server.go:141] control plane version: v1.34.1
	I1101 09:22:12.307110  276088 api_server.go:131] duration metric: took 13.058682ms to wait for apiserver health ...
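The healthz wait above is a plain HTTPS GET against the API server's /healthz endpoint until it answers 200 "ok". A rough standalone Go equivalent is sketched below; it assumes the cluster CA path from this jenkins profile and that anonymous access to /healthz is enabled (the upstream default), and it is not the api_server.go code referenced in the log.

	package main

	import (
		"crypto/tls"
		"crypto/x509"
		"fmt"
		"io"
		"net/http"
		"os"
		"time"
	)

	func main() {
		// CA path and endpoint taken from the run above; adjust for other profiles.
		caPEM, err := os.ReadFile("/home/jenkins/minikube-integration/21835-5913/.minikube/ca.crt")
		if err != nil {
			panic(err)
		}
		pool := x509.NewCertPool()
		pool.AppendCertsFromPEM(caPEM)
		client := &http.Client{
			Timeout:   5 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{RootCAs: pool}},
		}
		resp, err := client.Get("https://192.168.85.2:8443/healthz")
		if err != nil {
			panic(err)
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		fmt.Printf("%d: %s\n", resp.StatusCode, body) // a healthy control plane prints "200: ok"
	}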
	I1101 09:22:12.307119  276088 system_pods.go:43] waiting for kube-system pods to appear ...
	I1101 09:22:12.310740  276088 system_pods.go:59] 9 kube-system pods found
	I1101 09:22:12.310780  276088 system_pods.go:61] "coredns-66bc5c9577-6c464" [0f12c371-d8b0-4797-886b-88ad79d0048e] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1101 09:22:12.310788  276088 system_pods.go:61] "coredns-66bc5c9577-x985f" [54ee05c3-277f-4227-95c3-987ba25931d9] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1101 09:22:12.310798  276088 system_pods.go:61] "etcd-kubernetes-upgrade-846924" [b8c5fd34-3cee-4578-a8a7-f24b0ccf8445] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1101 09:22:12.310804  276088 system_pods.go:61] "kindnet-ggxpr" [a74e967b-3726-4cbb-a504-a82d8b0be533] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1101 09:22:12.310811  276088 system_pods.go:61] "kube-apiserver-kubernetes-upgrade-846924" [6097c5ef-a73e-42b7-b94f-18ca1a1557c9] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1101 09:22:12.310818  276088 system_pods.go:61] "kube-controller-manager-kubernetes-upgrade-846924" [65dc23df-d9ee-4112-a996-a43c2e516995] Running
	I1101 09:22:12.310824  276088 system_pods.go:61] "kube-proxy-wgkv4" [5466937d-861c-4ae1-867c-41f269caa8b9] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1101 09:22:12.310831  276088 system_pods.go:61] "kube-scheduler-kubernetes-upgrade-846924" [8eeed041-27de-44a5-b423-b95e532319f3] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1101 09:22:12.310835  276088 system_pods.go:61] "storage-provisioner" [10713fba-b7c2-4716-8be0-e67b8d20cd10] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1101 09:22:12.310842  276088 system_pods.go:74] duration metric: took 3.718879ms to wait for pod list to return data ...
	I1101 09:22:12.310853  276088 kubeadm.go:587] duration metric: took 196.203551ms to wait for: map[apiserver:true system_pods:true]
	I1101 09:22:12.310881  276088 node_conditions.go:102] verifying NodePressure condition ...
	I1101 09:22:12.313325  276088 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1101 09:22:12.313357  276088 node_conditions.go:123] node cpu capacity is 8
	I1101 09:22:12.313376  276088 node_conditions.go:105] duration metric: took 2.489932ms to run NodePressure ...
	I1101 09:22:12.313390  276088 start.go:242] waiting for startup goroutines ...
	I1101 09:22:12.329409  276088 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1101 09:22:12.793951  276088 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1101 09:22:12.795193  276088 addons.go:515] duration metric: took 680.501357ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1101 09:22:12.795239  276088 start.go:247] waiting for cluster config update ...
	I1101 09:22:12.795256  276088 start.go:256] writing updated cluster config ...
	I1101 09:22:12.795551  276088 ssh_runner.go:195] Run: rm -f paused
	I1101 09:22:12.846805  276088 start.go:628] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1101 09:22:12.848536  276088 out.go:179] * Done! kubectl is now configured to use "kubernetes-upgrade-846924" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Nov 01 09:22:11 newest-cni-340756 crio[512]: time="2025-11-01T09:22:11.398602592Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 09:22:11 newest-cni-340756 crio[512]: time="2025-11-01T09:22:11.403154437Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=eae92eb3-a1c1-4547-8f57-295e692b202a name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 01 09:22:11 newest-cni-340756 crio[512]: time="2025-11-01T09:22:11.404247203Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=801458d8-9ade-43e4-8ec1-b3824c82e45b name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 01 09:22:11 newest-cni-340756 crio[512]: time="2025-11-01T09:22:11.405267556Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Nov 01 09:22:11 newest-cni-340756 crio[512]: time="2025-11-01T09:22:11.406627681Z" level=info msg="Ran pod sandbox dc43290017f13f90c4408efeb952e630b28db67823fac5b2720f72305f36d014 with infra container: kube-system/kindnet-gjnst/POD" id=eae92eb3-a1c1-4547-8f57-295e692b202a name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 01 09:22:11 newest-cni-340756 crio[512]: time="2025-11-01T09:22:11.40730219Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Nov 01 09:22:11 newest-cni-340756 crio[512]: time="2025-11-01T09:22:11.408411428Z" level=info msg="Ran pod sandbox 5113f0e39ce07db884d501625a8e277af971b5cdd56c1606451f62ea70d8488f with infra container: kube-system/kube-proxy-wp2h9/POD" id=801458d8-9ade-43e4-8ec1-b3824c82e45b name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 01 09:22:11 newest-cni-340756 crio[512]: time="2025-11-01T09:22:11.408677043Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=6ca27e73-c763-4a94-91a2-f72cf35b4bd3 name=/runtime.v1.ImageService/ImageStatus
	Nov 01 09:22:11 newest-cni-340756 crio[512]: time="2025-11-01T09:22:11.410476877Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=efd1acd8-7263-4925-a8b5-9ee30af0ecf1 name=/runtime.v1.ImageService/ImageStatus
	Nov 01 09:22:11 newest-cni-340756 crio[512]: time="2025-11-01T09:22:11.41307815Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=d7aab9c6-1bad-4368-a305-2ef487ec8795 name=/runtime.v1.ImageService/ImageStatus
	Nov 01 09:22:11 newest-cni-340756 crio[512]: time="2025-11-01T09:22:11.413408943Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=3f3976bb-19a4-4c89-88cb-3198f4b6a716 name=/runtime.v1.ImageService/ImageStatus
	Nov 01 09:22:11 newest-cni-340756 crio[512]: time="2025-11-01T09:22:11.415410568Z" level=info msg="Creating container: kube-system/kube-proxy-wp2h9/kube-proxy" id=df7840df-57a3-420d-80f1-d42d997ec85e name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 09:22:11 newest-cni-340756 crio[512]: time="2025-11-01T09:22:11.4155454Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 09:22:11 newest-cni-340756 crio[512]: time="2025-11-01T09:22:11.415955105Z" level=info msg="Creating container: kube-system/kindnet-gjnst/kindnet-cni" id=9e16a589-b337-4b17-b4c8-7a7c314b087a name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 09:22:11 newest-cni-340756 crio[512]: time="2025-11-01T09:22:11.416077759Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 09:22:11 newest-cni-340756 crio[512]: time="2025-11-01T09:22:11.421220135Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 09:22:11 newest-cni-340756 crio[512]: time="2025-11-01T09:22:11.42351708Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 09:22:11 newest-cni-340756 crio[512]: time="2025-11-01T09:22:11.425164458Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 09:22:11 newest-cni-340756 crio[512]: time="2025-11-01T09:22:11.42579519Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 09:22:11 newest-cni-340756 crio[512]: time="2025-11-01T09:22:11.460278509Z" level=info msg="Created container ecb09261806b38572384c6d5faf910d9ce8eb7cb6141a0ceaa69fc19f0400922: kube-system/kindnet-gjnst/kindnet-cni" id=9e16a589-b337-4b17-b4c8-7a7c314b087a name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 09:22:11 newest-cni-340756 crio[512]: time="2025-11-01T09:22:11.461151046Z" level=info msg="Starting container: ecb09261806b38572384c6d5faf910d9ce8eb7cb6141a0ceaa69fc19f0400922" id=36e52c1e-c39c-400a-a131-21d3a31b61af name=/runtime.v1.RuntimeService/StartContainer
	Nov 01 09:22:11 newest-cni-340756 crio[512]: time="2025-11-01T09:22:11.463309145Z" level=info msg="Started container" PID=1042 containerID=ecb09261806b38572384c6d5faf910d9ce8eb7cb6141a0ceaa69fc19f0400922 description=kube-system/kindnet-gjnst/kindnet-cni id=36e52c1e-c39c-400a-a131-21d3a31b61af name=/runtime.v1.RuntimeService/StartContainer sandboxID=dc43290017f13f90c4408efeb952e630b28db67823fac5b2720f72305f36d014
	Nov 01 09:22:11 newest-cni-340756 crio[512]: time="2025-11-01T09:22:11.465532034Z" level=info msg="Created container bc38b97b18bb036461b1e9e26d9368291053fc9a73c345e3d2f5c589e50b3cf9: kube-system/kube-proxy-wp2h9/kube-proxy" id=df7840df-57a3-420d-80f1-d42d997ec85e name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 09:22:11 newest-cni-340756 crio[512]: time="2025-11-01T09:22:11.466467411Z" level=info msg="Starting container: bc38b97b18bb036461b1e9e26d9368291053fc9a73c345e3d2f5c589e50b3cf9" id=c1cebddd-9020-4d0a-8a25-63d2d5eccb30 name=/runtime.v1.RuntimeService/StartContainer
	Nov 01 09:22:11 newest-cni-340756 crio[512]: time="2025-11-01T09:22:11.470135462Z" level=info msg="Started container" PID=1043 containerID=bc38b97b18bb036461b1e9e26d9368291053fc9a73c345e3d2f5c589e50b3cf9 description=kube-system/kube-proxy-wp2h9/kube-proxy id=c1cebddd-9020-4d0a-8a25-63d2d5eccb30 name=/runtime.v1.RuntimeService/StartContainer sandboxID=5113f0e39ce07db884d501625a8e277af971b5cdd56c1606451f62ea70d8488f
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	bc38b97b18bb0       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7   4 seconds ago       Running             kube-proxy                1                   5113f0e39ce07       kube-proxy-wp2h9                            kube-system
	ecb09261806b3       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c   4 seconds ago       Running             kindnet-cni               1                   dc43290017f13       kindnet-gjnst                               kube-system
	aad36a7a488fb       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115   7 seconds ago       Running             etcd                      1                   30d18073cbf41       etcd-newest-cni-340756                      kube-system
	70c53a24cc729       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f   7 seconds ago       Running             kube-controller-manager   1                   0b5b077afb28c       kube-controller-manager-newest-cni-340756   kube-system
	373a67149dd37       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97   7 seconds ago       Running             kube-apiserver            1                   1f32e010b0021       kube-apiserver-newest-cni-340756            kube-system
	5e4575672f000       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813   7 seconds ago       Running             kube-scheduler            1                   60b4a24bcf5ce       kube-scheduler-newest-cni-340756            kube-system
	
	
	==> describe nodes <==
	Name:               newest-cni-340756
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=newest-cni-340756
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=21e20c7776311c6e29254646bf2620ea610dd192
	                    minikube.k8s.io/name=newest-cni-340756
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_01T09_21_39_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 01 Nov 2025 09:21:35 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-340756
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 01 Nov 2025 09:22:10 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 01 Nov 2025 09:22:10 +0000   Sat, 01 Nov 2025 09:21:34 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 01 Nov 2025 09:22:10 +0000   Sat, 01 Nov 2025 09:21:34 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 01 Nov 2025 09:22:10 +0000   Sat, 01 Nov 2025 09:21:34 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Sat, 01 Nov 2025 09:22:10 +0000   Sat, 01 Nov 2025 09:21:34 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Addresses:
	  InternalIP:  192.168.94.2
	  Hostname:    newest-cni-340756
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863348Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863348Ki
	  pods:               110
	System Info:
	  Machine ID:                 98aac72b9abe9f06f1b9b38568f5cc96
	  System UUID:                036af85e-ee16-42ad-9d1f-24aa651c4f5c
	  Boot ID:                    87f40279-f2b1-40f3-beea-00aea5942dd1
	  Kernel Version:             6.8.0-1043-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.42.0.0/24
	PodCIDRs:                     10.42.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-newest-cni-340756                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         38s
	  kube-system                 kindnet-gjnst                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      33s
	  kube-system                 kube-apiserver-newest-cni-340756             250m (3%)     0 (0%)      0 (0%)           0 (0%)         38s
	  kube-system                 kube-controller-manager-newest-cni-340756    200m (2%)     0 (0%)      0 (0%)           0 (0%)         38s
	  kube-system                 kube-proxy-wp2h9                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         33s
	  kube-system                 kube-scheduler-newest-cni-340756             100m (1%)     0 (0%)      0 (0%)           0 (0%)         38s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 31s                kube-proxy       
	  Normal  Starting                 4s                 kube-proxy       
	  Normal  Starting                 43s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  43s (x8 over 43s)  kubelet          Node newest-cni-340756 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    43s (x8 over 43s)  kubelet          Node newest-cni-340756 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     43s (x8 over 43s)  kubelet          Node newest-cni-340756 status is now: NodeHasSufficientPID
	  Normal  Starting                 38s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  38s                kubelet          Node newest-cni-340756 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    38s                kubelet          Node newest-cni-340756 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     38s                kubelet          Node newest-cni-340756 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           34s                node-controller  Node newest-cni-340756 event: Registered Node newest-cni-340756 in Controller
	  Normal  RegisteredNode           3s                 node-controller  Node newest-cni-340756 event: Registered Node newest-cni-340756 in Controller
	
	
	==> dmesg <==
	[  +0.000025] ll header: 00000000: 0a 94 8e 4a 1a 1e 72 e7 eb 50 9b c7 08 00
	[  +2.047726] IPv4: martian source 10.105.176.244 from 192.168.49.2, on dev eth0
	[  +0.000008] ll header: 00000000: 0a 94 8e 4a 1a 1e 72 e7 eb 50 9b c7 08 00
	[Nov 1 08:38] IPv4: martian source 10.105.176.244 from 192.168.49.2, on dev eth0
	[  +0.000025] ll header: 00000000: 0a 94 8e 4a 1a 1e 72 e7 eb 50 9b c7 08 00
	[  +2.215674] IPv4: martian source 10.105.176.244 from 192.168.49.2, on dev eth0
	[  +0.000007] ll header: 00000000: 0a 94 8e 4a 1a 1e 72 e7 eb 50 9b c7 08 00
	[  +1.047939] IPv4: martian source 10.244.0.3 from 192.168.49.2, on dev eth0
	[  +0.000023] ll header: 00000000: 0a 94 8e 4a 1a 1e 72 e7 eb 50 9b c7 08 00
	[  +1.023915] IPv4: martian source 10.244.0.3 from 192.168.49.2, on dev eth0
	[  +0.000007] ll header: 00000000: 0a 94 8e 4a 1a 1e 72 e7 eb 50 9b c7 08 00
	[  +1.023851] IPv4: martian source 10.244.0.3 from 192.168.49.2, on dev eth0
	[  +0.000021] ll header: 00000000: 0a 94 8e 4a 1a 1e 72 e7 eb 50 9b c7 08 00
	[  +1.023867] IPv4: martian source 10.244.0.3 from 192.168.49.2, on dev eth0
	[  +0.000026] ll header: 00000000: 0a 94 8e 4a 1a 1e 72 e7 eb 50 9b c7 08 00
	[  +1.023880] IPv4: martian source 10.244.0.3 from 192.168.49.2, on dev eth0
	[  +0.000033] ll header: 00000000: 0a 94 8e 4a 1a 1e 72 e7 eb 50 9b c7 08 00
	[  +1.023884] IPv4: martian source 10.244.0.3 from 192.168.49.2, on dev eth0
	[  +0.000029] ll header: 00000000: 0a 94 8e 4a 1a 1e 72 e7 eb 50 9b c7 08 00
	[  +1.023851] IPv4: martian source 10.244.0.3 from 192.168.49.2, on dev eth0
	[  +0.000028] ll header: 00000000: 0a 94 8e 4a 1a 1e 72 e7 eb 50 9b c7 08 00
	[  +4.031524] IPv4: martian source 10.244.0.3 from 192.168.49.2, on dev eth0
	[  +0.000026] ll header: 00000000: 0a 94 8e 4a 1a 1e 72 e7 eb 50 9b c7 08 00
	[  +8.255123] IPv4: martian source 10.244.0.3 from 192.168.49.2, on dev eth0
	[  +0.000022] ll header: 00000000: 0a 94 8e 4a 1a 1e 72 e7 eb 50 9b c7 08 00
	
	
	==> etcd [aad36a7a488fb62d76728cb3db23aa210d517cbd490ee24cc0c23c7d3785ffaa] <==
	{"level":"warn","ts":"2025-11-01T09:22:09.644078Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36576","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:22:09.653719Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36592","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:22:09.661654Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36608","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:22:09.671599Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36624","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:22:09.680156Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36656","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:22:09.688849Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36668","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:22:09.696811Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36684","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:22:09.704766Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36694","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:22:09.712043Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36712","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:22:09.718778Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36728","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:22:09.725708Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36752","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:22:09.741766Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36768","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:22:09.748992Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36782","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:22:09.756377Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36788","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:22:09.764979Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36812","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:22:09.773653Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36826","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:22:09.782111Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36834","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:22:09.790510Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36858","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:22:09.798389Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36870","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:22:09.806681Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36884","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:22:09.813670Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36902","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:22:09.820210Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36926","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:22:09.836536Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36946","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:22:09.850572Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36974","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:22:09.905004Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36994","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 09:22:16 up  1:04,  0 user,  load average: 4.06, 2.99, 1.78
	Linux newest-cni-340756 6.8.0-1043-gcp #46~22.04.1-Ubuntu SMP Wed Oct 22 19:00:03 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [ecb09261806b38572384c6d5faf910d9ce8eb7cb6141a0ceaa69fc19f0400922] <==
	I1101 09:22:11.729362       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1101 09:22:11.729641       1 main.go:139] hostIP = 192.168.94.2
	podIP = 192.168.94.2
	I1101 09:22:11.729970       1 main.go:148] setting mtu 1500 for CNI 
	I1101 09:22:11.729989       1 main.go:178] kindnetd IP family: "ipv4"
	I1101 09:22:11.730011       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-01T09:22:11Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1101 09:22:11.933470       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1101 09:22:11.933523       1 controller.go:381] "Waiting for informer caches to sync"
	I1101 09:22:11.933539       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1101 09:22:12.028152       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1101 09:22:12.234339       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1101 09:22:12.234363       1 metrics.go:72] Registering metrics
	I1101 09:22:12.234412       1 controller.go:711] "Syncing nftables rules"
	
	
	==> kube-apiserver [373a67149dd379dbe02a8dd2c5dd1346feb196b4cd96a4a446a405b296b37f88] <==
	I1101 09:22:10.414602       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1101 09:22:10.419571       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1101 09:22:10.428432       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1101 09:22:10.428494       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1101 09:22:10.428505       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1101 09:22:10.428537       1 aggregator.go:171] initial CRD sync complete...
	I1101 09:22:10.428551       1 autoregister_controller.go:144] Starting autoregister controller
	I1101 09:22:10.428559       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1101 09:22:10.428564       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1101 09:22:10.428567       1 cache.go:39] Caches are synced for autoregister controller
	E1101 09:22:10.444977       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1101 09:22:10.451596       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1101 09:22:10.461746       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1101 09:22:10.844154       1 controller.go:667] quota admission added evaluator for: namespaces
	I1101 09:22:10.881292       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1101 09:22:10.907889       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1101 09:22:10.921428       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1101 09:22:10.930132       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1101 09:22:10.985028       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.101.104.26"}
	I1101 09:22:11.003829       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.110.29.105"}
	I1101 09:22:11.322019       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1101 09:22:13.394929       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1101 09:22:13.442901       1 controller.go:667] quota admission added evaluator for: endpoints
	I1101 09:22:13.544018       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1101 09:22:13.544017       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [70c53a24cc729a3d1a8a2c6693f407f6cbfc3bef1804693e6b09d3b79a7a245a] <==
	I1101 09:22:13.017709       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1101 09:22:13.022014       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1101 09:22:13.022080       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1101 09:22:13.022112       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1101 09:22:13.022122       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1101 09:22:13.022127       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1101 09:22:13.029318       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1101 09:22:13.033665       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1101 09:22:13.040457       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1101 09:22:13.040491       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1101 09:22:13.040512       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1101 09:22:13.040551       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1101 09:22:13.041742       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1101 09:22:13.041784       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1101 09:22:13.041848       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1101 09:22:13.041892       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1101 09:22:13.042012       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="newest-cni-340756"
	I1101 09:22:13.042070       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1101 09:22:13.045495       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1101 09:22:13.045762       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1101 09:22:13.047970       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1101 09:22:13.049110       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1101 09:22:13.056304       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1101 09:22:13.064533       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1101 09:22:13.069025       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [bc38b97b18bb036461b1e9e26d9368291053fc9a73c345e3d2f5c589e50b3cf9] <==
	I1101 09:22:11.516856       1 server_linux.go:53] "Using iptables proxy"
	I1101 09:22:11.577505       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1101 09:22:11.678155       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1101 09:22:11.678208       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.94.2"]
	E1101 09:22:11.678307       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1101 09:22:11.699353       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1101 09:22:11.699443       1 server_linux.go:132] "Using iptables Proxier"
	I1101 09:22:11.709604       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1101 09:22:11.714282       1 server.go:527] "Version info" version="v1.34.1"
	I1101 09:22:11.714321       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1101 09:22:11.719282       1 config.go:200] "Starting service config controller"
	I1101 09:22:11.719307       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1101 09:22:11.719333       1 config.go:106] "Starting endpoint slice config controller"
	I1101 09:22:11.719338       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1101 09:22:11.719354       1 config.go:403] "Starting serviceCIDR config controller"
	I1101 09:22:11.719360       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1101 09:22:11.720212       1 config.go:309] "Starting node config controller"
	I1101 09:22:11.720248       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1101 09:22:11.821742       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1101 09:22:11.821793       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1101 09:22:11.821830       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1101 09:22:11.821917       1 shared_informer.go:356] "Caches are synced" controller="node config"
	
	
	==> kube-scheduler [5e4575672f000ae294315e642c93643f4cc9fc2335f9213649ae518477bdd2f6] <==
	I1101 09:22:09.029728       1 serving.go:386] Generated self-signed cert in-memory
	W1101 09:22:10.335239       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1101 09:22:10.335286       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1101 09:22:10.335308       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1101 09:22:10.335319       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1101 09:22:10.364549       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1101 09:22:10.370892       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1101 09:22:10.374445       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1101 09:22:10.374483       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1101 09:22:10.375529       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1101 09:22:10.375598       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1101 09:22:10.387889       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found]" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	I1101 09:22:10.475225       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 01 09:22:10 newest-cni-340756 kubelet[649]: E1101 09:22:10.148523     649 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"newest-cni-340756\" not found" node="newest-cni-340756"
	Nov 01 09:22:10 newest-cni-340756 kubelet[649]: E1101 09:22:10.149558     649 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"newest-cni-340756\" not found" node="newest-cni-340756"
	Nov 01 09:22:10 newest-cni-340756 kubelet[649]: I1101 09:22:10.395487     649 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-newest-cni-340756"
	Nov 01 09:22:10 newest-cni-340756 kubelet[649]: I1101 09:22:10.472553     649 kubelet_node_status.go:124] "Node was previously registered" node="newest-cni-340756"
	Nov 01 09:22:10 newest-cni-340756 kubelet[649]: I1101 09:22:10.472669     649 kubelet_node_status.go:78] "Successfully registered node" node="newest-cni-340756"
	Nov 01 09:22:10 newest-cni-340756 kubelet[649]: I1101 09:22:10.472707     649 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.42.0.0/24"
	Nov 01 09:22:10 newest-cni-340756 kubelet[649]: I1101 09:22:10.473848     649 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.42.0.0/24"
	Nov 01 09:22:10 newest-cni-340756 kubelet[649]: E1101 09:22:10.513424     649 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-newest-cni-340756\" already exists" pod="kube-system/kube-apiserver-newest-cni-340756"
	Nov 01 09:22:10 newest-cni-340756 kubelet[649]: I1101 09:22:10.513470     649 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-newest-cni-340756"
	Nov 01 09:22:10 newest-cni-340756 kubelet[649]: E1101 09:22:10.526622     649 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-newest-cni-340756\" already exists" pod="kube-system/kube-controller-manager-newest-cni-340756"
	Nov 01 09:22:10 newest-cni-340756 kubelet[649]: I1101 09:22:10.526683     649 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-newest-cni-340756"
	Nov 01 09:22:10 newest-cni-340756 kubelet[649]: E1101 09:22:10.536602     649 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-newest-cni-340756\" already exists" pod="kube-system/kube-scheduler-newest-cni-340756"
	Nov 01 09:22:10 newest-cni-340756 kubelet[649]: I1101 09:22:10.536681     649 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/etcd-newest-cni-340756"
	Nov 01 09:22:10 newest-cni-340756 kubelet[649]: E1101 09:22:10.549875     649 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"etcd-newest-cni-340756\" already exists" pod="kube-system/etcd-newest-cni-340756"
	Nov 01 09:22:11 newest-cni-340756 kubelet[649]: I1101 09:22:11.088404     649 apiserver.go:52] "Watching apiserver"
	Nov 01 09:22:11 newest-cni-340756 kubelet[649]: I1101 09:22:11.094240     649 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Nov 01 09:22:11 newest-cni-340756 kubelet[649]: I1101 09:22:11.164039     649 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e6a908ac-4dfb-4f1c-8059-79695562a817-xtables-lock\") pod \"kube-proxy-wp2h9\" (UID: \"e6a908ac-4dfb-4f1c-8059-79695562a817\") " pod="kube-system/kube-proxy-wp2h9"
	Nov 01 09:22:11 newest-cni-340756 kubelet[649]: I1101 09:22:11.164087     649 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/9c4e4a33-eff1-47ec-94bc-7f9196c547ff-cni-cfg\") pod \"kindnet-gjnst\" (UID: \"9c4e4a33-eff1-47ec-94bc-7f9196c547ff\") " pod="kube-system/kindnet-gjnst"
	Nov 01 09:22:11 newest-cni-340756 kubelet[649]: I1101 09:22:11.164115     649 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9c4e4a33-eff1-47ec-94bc-7f9196c547ff-xtables-lock\") pod \"kindnet-gjnst\" (UID: \"9c4e4a33-eff1-47ec-94bc-7f9196c547ff\") " pod="kube-system/kindnet-gjnst"
	Nov 01 09:22:11 newest-cni-340756 kubelet[649]: I1101 09:22:11.164139     649 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9c4e4a33-eff1-47ec-94bc-7f9196c547ff-lib-modules\") pod \"kindnet-gjnst\" (UID: \"9c4e4a33-eff1-47ec-94bc-7f9196c547ff\") " pod="kube-system/kindnet-gjnst"
	Nov 01 09:22:11 newest-cni-340756 kubelet[649]: I1101 09:22:11.164200     649 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e6a908ac-4dfb-4f1c-8059-79695562a817-lib-modules\") pod \"kube-proxy-wp2h9\" (UID: \"e6a908ac-4dfb-4f1c-8059-79695562a817\") " pod="kube-system/kube-proxy-wp2h9"
	Nov 01 09:22:12 newest-cni-340756 kubelet[649]: I1101 09:22:12.826502     649 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	Nov 01 09:22:12 newest-cni-340756 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 01 09:22:12 newest-cni-340756 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 01 09:22:12 newest-cni-340756 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-340756 -n newest-cni-340756
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-340756 -n newest-cni-340756: exit status 2 (385.34057ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context newest-cni-340756 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: coredns-66bc5c9577-tmnp2 storage-provisioner dashboard-metrics-scraper-6ffb444bf9-gcggq kubernetes-dashboard-855c9754f9-jqdvd
helpers_test.go:282: ======> post-mortem[TestStartStop/group/newest-cni/serial/Pause]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context newest-cni-340756 describe pod coredns-66bc5c9577-tmnp2 storage-provisioner dashboard-metrics-scraper-6ffb444bf9-gcggq kubernetes-dashboard-855c9754f9-jqdvd
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context newest-cni-340756 describe pod coredns-66bc5c9577-tmnp2 storage-provisioner dashboard-metrics-scraper-6ffb444bf9-gcggq kubernetes-dashboard-855c9754f9-jqdvd: exit status 1 (65.046195ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "coredns-66bc5c9577-tmnp2" not found
	Error from server (NotFound): pods "storage-provisioner" not found
	Error from server (NotFound): pods "dashboard-metrics-scraper-6ffb444bf9-gcggq" not found
	Error from server (NotFound): pods "kubernetes-dashboard-855c9754f9-jqdvd" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context newest-cni-340756 describe pod coredns-66bc5c9577-tmnp2 storage-provisioner dashboard-metrics-scraper-6ffb444bf9-gcggq kubernetes-dashboard-855c9754f9-jqdvd: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect newest-cni-340756
helpers_test.go:243: (dbg) docker inspect newest-cni-340756:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "9977e921720f0f30775d6c9164d67ea2db04ff6c50519da751c92db9fc5fc85b",
	        "Created": "2025-11-01T09:21:23.482376732Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 273823,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-01T09:22:00.001656371Z",
	            "FinishedAt": "2025-11-01T09:21:58.895172046Z"
	        },
	        "Image": "sha256:a1caeebaf98ed0136731e905a1e086f77985a42c2ebb5a7e0b3d0bd7fcbe10cc",
	        "ResolvConfPath": "/var/lib/docker/containers/9977e921720f0f30775d6c9164d67ea2db04ff6c50519da751c92db9fc5fc85b/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/9977e921720f0f30775d6c9164d67ea2db04ff6c50519da751c92db9fc5fc85b/hostname",
	        "HostsPath": "/var/lib/docker/containers/9977e921720f0f30775d6c9164d67ea2db04ff6c50519da751c92db9fc5fc85b/hosts",
	        "LogPath": "/var/lib/docker/containers/9977e921720f0f30775d6c9164d67ea2db04ff6c50519da751c92db9fc5fc85b/9977e921720f0f30775d6c9164d67ea2db04ff6c50519da751c92db9fc5fc85b-json.log",
	        "Name": "/newest-cni-340756",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-340756:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "newest-cni-340756",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "9977e921720f0f30775d6c9164d67ea2db04ff6c50519da751c92db9fc5fc85b",
	                "LowerDir": "/var/lib/docker/overlay2/79a5b3fa0361a2a9c5d3edbeca3366aecf897b34708fba6c670fef7311204878-init/diff:/var/lib/docker/overlay2/95b4043403bc6c2b0722fad0bb31817195e1e004282cc78b772e7c8c9f9def2d/diff",
	                "MergedDir": "/var/lib/docker/overlay2/79a5b3fa0361a2a9c5d3edbeca3366aecf897b34708fba6c670fef7311204878/merged",
	                "UpperDir": "/var/lib/docker/overlay2/79a5b3fa0361a2a9c5d3edbeca3366aecf897b34708fba6c670fef7311204878/diff",
	                "WorkDir": "/var/lib/docker/overlay2/79a5b3fa0361a2a9c5d3edbeca3366aecf897b34708fba6c670fef7311204878/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-340756",
	                "Source": "/var/lib/docker/volumes/newest-cni-340756/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-340756",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-340756",
	                "name.minikube.sigs.k8s.io": "newest-cni-340756",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "561fb00f1940aa01fcb0a4250de8744e1523a36f72770fbfc5a55c9c7786e3a3",
	            "SandboxKey": "/var/run/docker/netns/561fb00f1940",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33093"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33094"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33097"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33095"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33096"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-340756": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.94.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "da:a0:bd:11:5f:58",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "6d98c8d1b523eaf92b0807c4ccbd2e833f29938a64f5a83fb094948eae42b694",
	                    "EndpointID": "9c129a9335972736767b14a5193462aee8d2af40aeb547815cf73930f371c7df",
	                    "Gateway": "192.168.94.1",
	                    "IPAddress": "192.168.94.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-340756",
	                        "9977e921720f"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-340756 -n newest-cni-340756
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-340756 -n newest-cni-340756: exit status 2 (348.102391ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/newest-cni/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-340756 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p newest-cni-340756 logs -n 25: (1.695233192s)
helpers_test.go:260: TestStartStop/group/newest-cni/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ pause   │ -p no-preload-397460 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-397460            │ jenkins │ v1.37.0 │ 01 Nov 25 09:21 UTC │                     │
	│ delete  │ -p old-k8s-version-152344                                                                                                                                                                                                                     │ old-k8s-version-152344       │ jenkins │ v1.37.0 │ 01 Nov 25 09:21 UTC │ 01 Nov 25 09:21 UTC │
	│ delete  │ -p old-k8s-version-152344                                                                                                                                                                                                                     │ old-k8s-version-152344       │ jenkins │ v1.37.0 │ 01 Nov 25 09:21 UTC │ 01 Nov 25 09:21 UTC │
	│ delete  │ -p no-preload-397460                                                                                                                                                                                                                          │ no-preload-397460            │ jenkins │ v1.37.0 │ 01 Nov 25 09:21 UTC │ 01 Nov 25 09:21 UTC │
	│ delete  │ -p disable-driver-mounts-366530                                                                                                                                                                                                               │ disable-driver-mounts-366530 │ jenkins │ v1.37.0 │ 01 Nov 25 09:21 UTC │ 01 Nov 25 09:21 UTC │
	│ start   │ -p default-k8s-diff-port-648641 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-648641 │ jenkins │ v1.37.0 │ 01 Nov 25 09:21 UTC │ 01 Nov 25 09:21 UTC │
	│ delete  │ -p no-preload-397460                                                                                                                                                                                                                          │ no-preload-397460            │ jenkins │ v1.37.0 │ 01 Nov 25 09:21 UTC │ 01 Nov 25 09:21 UTC │
	│ start   │ -p newest-cni-340756 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-340756            │ jenkins │ v1.37.0 │ 01 Nov 25 09:21 UTC │ 01 Nov 25 09:21 UTC │
	│ addons  │ enable metrics-server -p newest-cni-340756 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-340756            │ jenkins │ v1.37.0 │ 01 Nov 25 09:21 UTC │                     │
	│ stop    │ -p newest-cni-340756 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-340756            │ jenkins │ v1.37.0 │ 01 Nov 25 09:21 UTC │ 01 Nov 25 09:21 UTC │
	│ image   │ embed-certs-236314 image list --format=json                                                                                                                                                                                                   │ embed-certs-236314           │ jenkins │ v1.37.0 │ 01 Nov 25 09:21 UTC │ 01 Nov 25 09:21 UTC │
	│ pause   │ -p embed-certs-236314 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-236314           │ jenkins │ v1.37.0 │ 01 Nov 25 09:21 UTC │                     │
	│ addons  │ enable dashboard -p newest-cni-340756 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-340756            │ jenkins │ v1.37.0 │ 01 Nov 25 09:21 UTC │ 01 Nov 25 09:21 UTC │
	│ start   │ -p newest-cni-340756 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-340756            │ jenkins │ v1.37.0 │ 01 Nov 25 09:21 UTC │ 01 Nov 25 09:22 UTC │
	│ delete  │ -p embed-certs-236314                                                                                                                                                                                                                         │ embed-certs-236314           │ jenkins │ v1.37.0 │ 01 Nov 25 09:22 UTC │ 01 Nov 25 09:22 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-648641 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-648641 │ jenkins │ v1.37.0 │ 01 Nov 25 09:22 UTC │                     │
	│ start   │ -p kubernetes-upgrade-846924 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio                                                                                                                             │ kubernetes-upgrade-846924    │ jenkins │ v1.37.0 │ 01 Nov 25 09:22 UTC │                     │
	│ start   │ -p kubernetes-upgrade-846924 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                      │ kubernetes-upgrade-846924    │ jenkins │ v1.37.0 │ 01 Nov 25 09:22 UTC │ 01 Nov 25 09:22 UTC │
	│ delete  │ -p embed-certs-236314                                                                                                                                                                                                                         │ embed-certs-236314           │ jenkins │ v1.37.0 │ 01 Nov 25 09:22 UTC │ 01 Nov 25 09:22 UTC │
	│ start   │ -p auto-204434 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio                                                                                                                       │ auto-204434                  │ jenkins │ v1.37.0 │ 01 Nov 25 09:22 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-648641 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-648641 │ jenkins │ v1.37.0 │ 01 Nov 25 09:22 UTC │                     │
	│ image   │ newest-cni-340756 image list --format=json                                                                                                                                                                                                    │ newest-cni-340756            │ jenkins │ v1.37.0 │ 01 Nov 25 09:22 UTC │ 01 Nov 25 09:22 UTC │
	│ pause   │ -p newest-cni-340756 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-340756            │ jenkins │ v1.37.0 │ 01 Nov 25 09:22 UTC │                     │
	│ delete  │ -p kubernetes-upgrade-846924                                                                                                                                                                                                                  │ kubernetes-upgrade-846924    │ jenkins │ v1.37.0 │ 01 Nov 25 09:22 UTC │ 01 Nov 25 09:22 UTC │
	│ start   │ -p kindnet-204434 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio                                                                                                      │ kindnet-204434               │ jenkins │ v1.37.0 │ 01 Nov 25 09:22 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/01 09:22:15
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1101 09:22:15.476544  281046 out.go:360] Setting OutFile to fd 1 ...
	I1101 09:22:15.476686  281046 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:22:15.476693  281046 out.go:374] Setting ErrFile to fd 2...
	I1101 09:22:15.476700  281046 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:22:15.477054  281046 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21835-5913/.minikube/bin
	I1101 09:22:15.477735  281046 out.go:368] Setting JSON to false
	I1101 09:22:15.479316  281046 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":3883,"bootTime":1761985052,"procs":274,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1101 09:22:15.479461  281046 start.go:143] virtualization: kvm guest
	I1101 09:22:15.482884  281046 out.go:179] * [kindnet-204434] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1101 09:22:15.484576  281046 out.go:179]   - MINIKUBE_LOCATION=21835
	I1101 09:22:15.484678  281046 notify.go:221] Checking for updates...
	I1101 09:22:15.490420  281046 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1101 09:22:15.491637  281046 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21835-5913/kubeconfig
	I1101 09:22:15.495113  281046 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21835-5913/.minikube
	I1101 09:22:15.496554  281046 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1101 09:22:15.497964  281046 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1101 09:22:15.499944  281046 config.go:182] Loaded profile config "auto-204434": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:22:15.500115  281046 config.go:182] Loaded profile config "default-k8s-diff-port-648641": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:22:15.500256  281046 config.go:182] Loaded profile config "newest-cni-340756": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:22:15.500370  281046 driver.go:422] Setting default libvirt URI to qemu:///system
	I1101 09:22:15.529232  281046 docker.go:124] docker version: linux-28.5.1:Docker Engine - Community
	I1101 09:22:15.529411  281046 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 09:22:15.593438  281046 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:65 OomKillDisable:false NGoroutines:79 SystemTime:2025-11-01 09:22:15.583144193 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1101 09:22:15.593568  281046 docker.go:319] overlay module found
	I1101 09:22:15.596016  281046 out.go:179] * Using the docker driver based on user configuration
	I1101 09:22:15.597166  281046 start.go:309] selected driver: docker
	I1101 09:22:15.597184  281046 start.go:930] validating driver "docker" against <nil>
	I1101 09:22:15.597196  281046 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1101 09:22:15.597809  281046 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 09:22:15.664348  281046 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:65 OomKillDisable:false NGoroutines:79 SystemTime:2025-11-01 09:22:15.653373304 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1101 09:22:15.664586  281046 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1101 09:22:15.664913  281046 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1101 09:22:15.666802  281046 out.go:179] * Using Docker driver with root privileges
	I1101 09:22:15.668630  281046 cni.go:84] Creating CNI manager for "kindnet"
	I1101 09:22:15.668656  281046 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1101 09:22:15.668751  281046 start.go:353] cluster config:
	{Name:kindnet-204434 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:kindnet-204434 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRunti
me:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentP
ID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 09:22:15.670806  281046 out.go:179] * Starting "kindnet-204434" primary control-plane node in "kindnet-204434" cluster
	I1101 09:22:15.672008  281046 cache.go:124] Beginning downloading kic base image for docker with crio
	I1101 09:22:15.674849  281046 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1101 09:22:15.676071  281046 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1101 09:22:15.676126  281046 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21835-5913/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1101 09:22:15.676143  281046 cache.go:59] Caching tarball of preloaded images
	I1101 09:22:15.676165  281046 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1101 09:22:15.676247  281046 preload.go:233] Found /home/jenkins/minikube-integration/21835-5913/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1101 09:22:15.676261  281046 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1101 09:22:15.676402  281046 profile.go:143] Saving config to /home/jenkins/minikube-integration/21835-5913/.minikube/profiles/kindnet-204434/config.json ...
	I1101 09:22:15.676430  281046 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21835-5913/.minikube/profiles/kindnet-204434/config.json: {Name:mkddef7aa0c672ea8e45d51c8103ae9bfa528a05 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:22:15.703158  281046 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1101 09:22:15.703183  281046 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1101 09:22:15.703202  281046 cache.go:233] Successfully downloaded all kic artifacts
	I1101 09:22:15.703229  281046 start.go:360] acquireMachinesLock for kindnet-204434: {Name:mk8c0426d81d005e71a7feb04302ee2c409b9f0f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1101 09:22:15.703335  281046 start.go:364] duration metric: took 86.524µs to acquireMachinesLock for "kindnet-204434"
	I1101 09:22:15.703368  281046 start.go:93] Provisioning new machine with config: &{Name:kindnet-204434 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:kindnet-204434 Namespace:default APIServerHAVIP: APIServerName:m
inikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirm
warePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1101 09:22:15.703459  281046 start.go:125] createHost starting for "" (driver="docker")
	I1101 09:22:10.937538  276407 cli_runner.go:164] Run: docker container inspect auto-204434 --format={{.State.Running}}
	I1101 09:22:10.961591  276407 cli_runner.go:164] Run: docker container inspect auto-204434 --format={{.State.Status}}
	I1101 09:22:10.987556  276407 cli_runner.go:164] Run: docker exec auto-204434 stat /var/lib/dpkg/alternatives/iptables
	I1101 09:22:11.042001  276407 oci.go:144] the created container "auto-204434" has a running status.
	I1101 09:22:11.042052  276407 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21835-5913/.minikube/machines/auto-204434/id_rsa...
	I1101 09:22:11.150997  276407 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21835-5913/.minikube/machines/auto-204434/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1101 09:22:11.183323  276407 cli_runner.go:164] Run: docker container inspect auto-204434 --format={{.State.Status}}
	I1101 09:22:11.208922  276407 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1101 09:22:11.208945  276407 kic_runner.go:114] Args: [docker exec --privileged auto-204434 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1101 09:22:11.263503  276407 cli_runner.go:164] Run: docker container inspect auto-204434 --format={{.State.Status}}
	I1101 09:22:11.289965  276407 machine.go:94] provisionDockerMachine start ...
	I1101 09:22:11.290258  276407 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-204434
	I1101 09:22:11.318884  276407 main.go:143] libmachine: Using SSH client type: native
	I1101 09:22:11.319200  276407 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33098 <nil> <nil>}
	I1101 09:22:11.319216  276407 main.go:143] libmachine: About to run SSH command:
	hostname
	I1101 09:22:11.319910  276407 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:55508->127.0.0.1:33098: read: connection reset by peer
	I1101 09:22:14.465593  276407 main.go:143] libmachine: SSH cmd err, output: <nil>: auto-204434
	
	I1101 09:22:14.465623  276407 ubuntu.go:182] provisioning hostname "auto-204434"
	I1101 09:22:14.465680  276407 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-204434
	I1101 09:22:14.485838  276407 main.go:143] libmachine: Using SSH client type: native
	I1101 09:22:14.486152  276407 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33098 <nil> <nil>}
	I1101 09:22:14.486174  276407 main.go:143] libmachine: About to run SSH command:
	sudo hostname auto-204434 && echo "auto-204434" | sudo tee /etc/hostname
	I1101 09:22:14.638709  276407 main.go:143] libmachine: SSH cmd err, output: <nil>: auto-204434
	
	I1101 09:22:14.638791  276407 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-204434
	I1101 09:22:14.657832  276407 main.go:143] libmachine: Using SSH client type: native
	I1101 09:22:14.658104  276407 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33098 <nil> <nil>}
	I1101 09:22:14.658126  276407 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sauto-204434' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 auto-204434/g' /etc/hosts;
				else 
					echo '127.0.1.1 auto-204434' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1101 09:22:14.814007  276407 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1101 09:22:14.814040  276407 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21835-5913/.minikube CaCertPath:/home/jenkins/minikube-integration/21835-5913/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21835-5913/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21835-5913/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21835-5913/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21835-5913/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21835-5913/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21835-5913/.minikube}
	I1101 09:22:14.814066  276407 ubuntu.go:190] setting up certificates
	I1101 09:22:14.814082  276407 provision.go:84] configureAuth start
	I1101 09:22:14.814146  276407 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-204434
	I1101 09:22:14.838529  276407 provision.go:143] copyHostCerts
	I1101 09:22:14.838634  276407 exec_runner.go:144] found /home/jenkins/minikube-integration/21835-5913/.minikube/ca.pem, removing ...
	I1101 09:22:14.838645  276407 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21835-5913/.minikube/ca.pem
	I1101 09:22:14.838706  276407 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21835-5913/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21835-5913/.minikube/ca.pem (1078 bytes)
	I1101 09:22:14.838853  276407 exec_runner.go:144] found /home/jenkins/minikube-integration/21835-5913/.minikube/cert.pem, removing ...
	I1101 09:22:14.838878  276407 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21835-5913/.minikube/cert.pem
	I1101 09:22:14.838932  276407 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21835-5913/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21835-5913/.minikube/cert.pem (1123 bytes)
	I1101 09:22:14.839033  276407 exec_runner.go:144] found /home/jenkins/minikube-integration/21835-5913/.minikube/key.pem, removing ...
	I1101 09:22:14.839057  276407 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21835-5913/.minikube/key.pem
	I1101 09:22:14.839105  276407 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21835-5913/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21835-5913/.minikube/key.pem (1675 bytes)
	I1101 09:22:14.839187  276407 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21835-5913/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21835-5913/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21835-5913/.minikube/certs/ca-key.pem org=jenkins.auto-204434 san=[127.0.0.1 192.168.76.2 auto-204434 localhost minikube]
	I1101 09:22:14.980788  276407 provision.go:177] copyRemoteCerts
	I1101 09:22:14.980856  276407 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1101 09:22:14.980913  276407 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-204434
	I1101 09:22:15.002258  276407 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/21835-5913/.minikube/machines/auto-204434/id_rsa Username:docker}
	I1101 09:22:15.107788  276407 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-5913/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1101 09:22:15.132209  276407 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-5913/.minikube/machines/server.pem --> /etc/docker/server.pem (1204 bytes)
	I1101 09:22:15.151563  276407 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-5913/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1101 09:22:15.171505  276407 provision.go:87] duration metric: took 357.406821ms to configureAuth
	I1101 09:22:15.171536  276407 ubuntu.go:206] setting minikube options for container-runtime
	I1101 09:22:15.171696  276407 config.go:182] Loaded profile config "auto-204434": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:22:15.171792  276407 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-204434
	I1101 09:22:15.193988  276407 main.go:143] libmachine: Using SSH client type: native
	I1101 09:22:15.194285  276407 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33098 <nil> <nil>}
	I1101 09:22:15.194310  276407 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1101 09:22:15.485352  276407 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1101 09:22:15.485405  276407 machine.go:97] duration metric: took 4.195391694s to provisionDockerMachine
	I1101 09:22:15.485421  276407 client.go:176] duration metric: took 9.508581208s to LocalClient.Create
	I1101 09:22:15.485441  276407 start.go:167] duration metric: took 9.508649416s to libmachine.API.Create "auto-204434"
	I1101 09:22:15.485453  276407 start.go:293] postStartSetup for "auto-204434" (driver="docker")
	I1101 09:22:15.485465  276407 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1101 09:22:15.485550  276407 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1101 09:22:15.485604  276407 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-204434
	I1101 09:22:15.508715  276407 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/21835-5913/.minikube/machines/auto-204434/id_rsa Username:docker}
	I1101 09:22:15.618470  276407 ssh_runner.go:195] Run: cat /etc/os-release
	I1101 09:22:15.623591  276407 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1101 09:22:15.623627  276407 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1101 09:22:15.623642  276407 filesync.go:126] Scanning /home/jenkins/minikube-integration/21835-5913/.minikube/addons for local assets ...
	I1101 09:22:15.623702  276407 filesync.go:126] Scanning /home/jenkins/minikube-integration/21835-5913/.minikube/files for local assets ...
	I1101 09:22:15.623828  276407 filesync.go:149] local asset: /home/jenkins/minikube-integration/21835-5913/.minikube/files/etc/ssl/certs/94142.pem -> 94142.pem in /etc/ssl/certs
	I1101 09:22:15.624070  276407 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1101 09:22:15.633412  276407 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-5913/.minikube/files/etc/ssl/certs/94142.pem --> /etc/ssl/certs/94142.pem (1708 bytes)
	I1101 09:22:15.657695  276407 start.go:296] duration metric: took 172.225179ms for postStartSetup
	I1101 09:22:15.658123  276407 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-204434
	I1101 09:22:15.678840  276407 profile.go:143] Saving config to /home/jenkins/minikube-integration/21835-5913/.minikube/profiles/auto-204434/config.json ...
	I1101 09:22:15.679168  276407 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1101 09:22:15.679236  276407 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-204434
	I1101 09:22:15.700621  276407 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/21835-5913/.minikube/machines/auto-204434/id_rsa Username:docker}
	
	
	==> CRI-O <==
	Nov 01 09:22:11 newest-cni-340756 crio[512]: time="2025-11-01T09:22:11.398602592Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 09:22:11 newest-cni-340756 crio[512]: time="2025-11-01T09:22:11.403154437Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=eae92eb3-a1c1-4547-8f57-295e692b202a name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 01 09:22:11 newest-cni-340756 crio[512]: time="2025-11-01T09:22:11.404247203Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=801458d8-9ade-43e4-8ec1-b3824c82e45b name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 01 09:22:11 newest-cni-340756 crio[512]: time="2025-11-01T09:22:11.405267556Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Nov 01 09:22:11 newest-cni-340756 crio[512]: time="2025-11-01T09:22:11.406627681Z" level=info msg="Ran pod sandbox dc43290017f13f90c4408efeb952e630b28db67823fac5b2720f72305f36d014 with infra container: kube-system/kindnet-gjnst/POD" id=eae92eb3-a1c1-4547-8f57-295e692b202a name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 01 09:22:11 newest-cni-340756 crio[512]: time="2025-11-01T09:22:11.40730219Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Nov 01 09:22:11 newest-cni-340756 crio[512]: time="2025-11-01T09:22:11.408411428Z" level=info msg="Ran pod sandbox 5113f0e39ce07db884d501625a8e277af971b5cdd56c1606451f62ea70d8488f with infra container: kube-system/kube-proxy-wp2h9/POD" id=801458d8-9ade-43e4-8ec1-b3824c82e45b name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 01 09:22:11 newest-cni-340756 crio[512]: time="2025-11-01T09:22:11.408677043Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=6ca27e73-c763-4a94-91a2-f72cf35b4bd3 name=/runtime.v1.ImageService/ImageStatus
	Nov 01 09:22:11 newest-cni-340756 crio[512]: time="2025-11-01T09:22:11.410476877Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=efd1acd8-7263-4925-a8b5-9ee30af0ecf1 name=/runtime.v1.ImageService/ImageStatus
	Nov 01 09:22:11 newest-cni-340756 crio[512]: time="2025-11-01T09:22:11.41307815Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=d7aab9c6-1bad-4368-a305-2ef487ec8795 name=/runtime.v1.ImageService/ImageStatus
	Nov 01 09:22:11 newest-cni-340756 crio[512]: time="2025-11-01T09:22:11.413408943Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=3f3976bb-19a4-4c89-88cb-3198f4b6a716 name=/runtime.v1.ImageService/ImageStatus
	Nov 01 09:22:11 newest-cni-340756 crio[512]: time="2025-11-01T09:22:11.415410568Z" level=info msg="Creating container: kube-system/kube-proxy-wp2h9/kube-proxy" id=df7840df-57a3-420d-80f1-d42d997ec85e name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 09:22:11 newest-cni-340756 crio[512]: time="2025-11-01T09:22:11.4155454Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 09:22:11 newest-cni-340756 crio[512]: time="2025-11-01T09:22:11.415955105Z" level=info msg="Creating container: kube-system/kindnet-gjnst/kindnet-cni" id=9e16a589-b337-4b17-b4c8-7a7c314b087a name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 09:22:11 newest-cni-340756 crio[512]: time="2025-11-01T09:22:11.416077759Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 09:22:11 newest-cni-340756 crio[512]: time="2025-11-01T09:22:11.421220135Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 09:22:11 newest-cni-340756 crio[512]: time="2025-11-01T09:22:11.42351708Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 09:22:11 newest-cni-340756 crio[512]: time="2025-11-01T09:22:11.425164458Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 09:22:11 newest-cni-340756 crio[512]: time="2025-11-01T09:22:11.42579519Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 09:22:11 newest-cni-340756 crio[512]: time="2025-11-01T09:22:11.460278509Z" level=info msg="Created container ecb09261806b38572384c6d5faf910d9ce8eb7cb6141a0ceaa69fc19f0400922: kube-system/kindnet-gjnst/kindnet-cni" id=9e16a589-b337-4b17-b4c8-7a7c314b087a name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 09:22:11 newest-cni-340756 crio[512]: time="2025-11-01T09:22:11.461151046Z" level=info msg="Starting container: ecb09261806b38572384c6d5faf910d9ce8eb7cb6141a0ceaa69fc19f0400922" id=36e52c1e-c39c-400a-a131-21d3a31b61af name=/runtime.v1.RuntimeService/StartContainer
	Nov 01 09:22:11 newest-cni-340756 crio[512]: time="2025-11-01T09:22:11.463309145Z" level=info msg="Started container" PID=1042 containerID=ecb09261806b38572384c6d5faf910d9ce8eb7cb6141a0ceaa69fc19f0400922 description=kube-system/kindnet-gjnst/kindnet-cni id=36e52c1e-c39c-400a-a131-21d3a31b61af name=/runtime.v1.RuntimeService/StartContainer sandboxID=dc43290017f13f90c4408efeb952e630b28db67823fac5b2720f72305f36d014
	Nov 01 09:22:11 newest-cni-340756 crio[512]: time="2025-11-01T09:22:11.465532034Z" level=info msg="Created container bc38b97b18bb036461b1e9e26d9368291053fc9a73c345e3d2f5c589e50b3cf9: kube-system/kube-proxy-wp2h9/kube-proxy" id=df7840df-57a3-420d-80f1-d42d997ec85e name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 09:22:11 newest-cni-340756 crio[512]: time="2025-11-01T09:22:11.466467411Z" level=info msg="Starting container: bc38b97b18bb036461b1e9e26d9368291053fc9a73c345e3d2f5c589e50b3cf9" id=c1cebddd-9020-4d0a-8a25-63d2d5eccb30 name=/runtime.v1.RuntimeService/StartContainer
	Nov 01 09:22:11 newest-cni-340756 crio[512]: time="2025-11-01T09:22:11.470135462Z" level=info msg="Started container" PID=1043 containerID=bc38b97b18bb036461b1e9e26d9368291053fc9a73c345e3d2f5c589e50b3cf9 description=kube-system/kube-proxy-wp2h9/kube-proxy id=c1cebddd-9020-4d0a-8a25-63d2d5eccb30 name=/runtime.v1.RuntimeService/StartContainer sandboxID=5113f0e39ce07db884d501625a8e277af971b5cdd56c1606451f62ea70d8488f
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	bc38b97b18bb0       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7   7 seconds ago       Running             kube-proxy                1                   5113f0e39ce07       kube-proxy-wp2h9                            kube-system
	ecb09261806b3       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c   7 seconds ago       Running             kindnet-cni               1                   dc43290017f13       kindnet-gjnst                               kube-system
	aad36a7a488fb       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115   10 seconds ago      Running             etcd                      1                   30d18073cbf41       etcd-newest-cni-340756                      kube-system
	70c53a24cc729       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f   10 seconds ago      Running             kube-controller-manager   1                   0b5b077afb28c       kube-controller-manager-newest-cni-340756   kube-system
	373a67149dd37       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97   10 seconds ago      Running             kube-apiserver            1                   1f32e010b0021       kube-apiserver-newest-cni-340756            kube-system
	5e4575672f000       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813   10 seconds ago      Running             kube-scheduler            1                   60b4a24bcf5ce       kube-scheduler-newest-cni-340756            kube-system
	
	
	==> describe nodes <==
	Name:               newest-cni-340756
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=newest-cni-340756
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=21e20c7776311c6e29254646bf2620ea610dd192
	                    minikube.k8s.io/name=newest-cni-340756
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_01T09_21_39_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 01 Nov 2025 09:21:35 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-340756
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 01 Nov 2025 09:22:10 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 01 Nov 2025 09:22:10 +0000   Sat, 01 Nov 2025 09:21:34 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 01 Nov 2025 09:22:10 +0000   Sat, 01 Nov 2025 09:21:34 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 01 Nov 2025 09:22:10 +0000   Sat, 01 Nov 2025 09:21:34 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Sat, 01 Nov 2025 09:22:10 +0000   Sat, 01 Nov 2025 09:21:34 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Addresses:
	  InternalIP:  192.168.94.2
	  Hostname:    newest-cni-340756
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863348Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863348Ki
	  pods:               110
	System Info:
	  Machine ID:                 98aac72b9abe9f06f1b9b38568f5cc96
	  System UUID:                036af85e-ee16-42ad-9d1f-24aa651c4f5c
	  Boot ID:                    87f40279-f2b1-40f3-beea-00aea5942dd1
	  Kernel Version:             6.8.0-1043-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.42.0.0/24
	PodCIDRs:                     10.42.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-newest-cni-340756                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         40s
	  kube-system                 kindnet-gjnst                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      35s
	  kube-system                 kube-apiserver-newest-cni-340756             250m (3%)     0 (0%)      0 (0%)           0 (0%)         40s
	  kube-system                 kube-controller-manager-newest-cni-340756    200m (2%)     0 (0%)      0 (0%)           0 (0%)         40s
	  kube-system                 kube-proxy-wp2h9                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         35s
	  kube-system                 kube-scheduler-newest-cni-340756             100m (1%)     0 (0%)      0 (0%)           0 (0%)         40s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 34s                kube-proxy       
	  Normal  Starting                 7s                 kube-proxy       
	  Normal  Starting                 45s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  45s (x8 over 45s)  kubelet          Node newest-cni-340756 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    45s (x8 over 45s)  kubelet          Node newest-cni-340756 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     45s (x8 over 45s)  kubelet          Node newest-cni-340756 status is now: NodeHasSufficientPID
	  Normal  Starting                 40s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  40s                kubelet          Node newest-cni-340756 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    40s                kubelet          Node newest-cni-340756 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     40s                kubelet          Node newest-cni-340756 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           36s                node-controller  Node newest-cni-340756 event: Registered Node newest-cni-340756 in Controller
	  Normal  RegisteredNode           5s                 node-controller  Node newest-cni-340756 event: Registered Node newest-cni-340756 in Controller
	
	
	==> dmesg <==
	[  +0.000025] ll header: 00000000: 0a 94 8e 4a 1a 1e 72 e7 eb 50 9b c7 08 00
	[  +2.047726] IPv4: martian source 10.105.176.244 from 192.168.49.2, on dev eth0
	[  +0.000008] ll header: 00000000: 0a 94 8e 4a 1a 1e 72 e7 eb 50 9b c7 08 00
	[Nov 1 08:38] IPv4: martian source 10.105.176.244 from 192.168.49.2, on dev eth0
	[  +0.000025] ll header: 00000000: 0a 94 8e 4a 1a 1e 72 e7 eb 50 9b c7 08 00
	[  +2.215674] IPv4: martian source 10.105.176.244 from 192.168.49.2, on dev eth0
	[  +0.000007] ll header: 00000000: 0a 94 8e 4a 1a 1e 72 e7 eb 50 9b c7 08 00
	[  +1.047939] IPv4: martian source 10.244.0.3 from 192.168.49.2, on dev eth0
	[  +0.000023] ll header: 00000000: 0a 94 8e 4a 1a 1e 72 e7 eb 50 9b c7 08 00
	[  +1.023915] IPv4: martian source 10.244.0.3 from 192.168.49.2, on dev eth0
	[  +0.000007] ll header: 00000000: 0a 94 8e 4a 1a 1e 72 e7 eb 50 9b c7 08 00
	[  +1.023851] IPv4: martian source 10.244.0.3 from 192.168.49.2, on dev eth0
	[  +0.000021] ll header: 00000000: 0a 94 8e 4a 1a 1e 72 e7 eb 50 9b c7 08 00
	[  +1.023867] IPv4: martian source 10.244.0.3 from 192.168.49.2, on dev eth0
	[  +0.000026] ll header: 00000000: 0a 94 8e 4a 1a 1e 72 e7 eb 50 9b c7 08 00
	[  +1.023880] IPv4: martian source 10.244.0.3 from 192.168.49.2, on dev eth0
	[  +0.000033] ll header: 00000000: 0a 94 8e 4a 1a 1e 72 e7 eb 50 9b c7 08 00
	[  +1.023884] IPv4: martian source 10.244.0.3 from 192.168.49.2, on dev eth0
	[  +0.000029] ll header: 00000000: 0a 94 8e 4a 1a 1e 72 e7 eb 50 9b c7 08 00
	[  +1.023851] IPv4: martian source 10.244.0.3 from 192.168.49.2, on dev eth0
	[  +0.000028] ll header: 00000000: 0a 94 8e 4a 1a 1e 72 e7 eb 50 9b c7 08 00
	[  +4.031524] IPv4: martian source 10.244.0.3 from 192.168.49.2, on dev eth0
	[  +0.000026] ll header: 00000000: 0a 94 8e 4a 1a 1e 72 e7 eb 50 9b c7 08 00
	[  +8.255123] IPv4: martian source 10.244.0.3 from 192.168.49.2, on dev eth0
	[  +0.000022] ll header: 00000000: 0a 94 8e 4a 1a 1e 72 e7 eb 50 9b c7 08 00
	
	
	==> etcd [aad36a7a488fb62d76728cb3db23aa210d517cbd490ee24cc0c23c7d3785ffaa] <==
	{"level":"warn","ts":"2025-11-01T09:22:09.644078Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36576","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:22:09.653719Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36592","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:22:09.661654Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36608","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:22:09.671599Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36624","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:22:09.680156Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36656","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:22:09.688849Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36668","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:22:09.696811Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36684","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:22:09.704766Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36694","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:22:09.712043Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36712","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:22:09.718778Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36728","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:22:09.725708Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36752","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:22:09.741766Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36768","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:22:09.748992Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36782","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:22:09.756377Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36788","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:22:09.764979Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36812","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:22:09.773653Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36826","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:22:09.782111Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36834","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:22:09.790510Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36858","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:22:09.798389Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36870","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:22:09.806681Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36884","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:22:09.813670Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36902","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:22:09.820210Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36926","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:22:09.836536Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36946","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:22:09.850572Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36974","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:22:09.905004Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36994","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 09:22:18 up  1:04,  0 user,  load average: 3.97, 2.99, 1.79
	Linux newest-cni-340756 6.8.0-1043-gcp #46~22.04.1-Ubuntu SMP Wed Oct 22 19:00:03 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [ecb09261806b38572384c6d5faf910d9ce8eb7cb6141a0ceaa69fc19f0400922] <==
	I1101 09:22:11.729362       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1101 09:22:11.729641       1 main.go:139] hostIP = 192.168.94.2
	podIP = 192.168.94.2
	I1101 09:22:11.729970       1 main.go:148] setting mtu 1500 for CNI 
	I1101 09:22:11.729989       1 main.go:178] kindnetd IP family: "ipv4"
	I1101 09:22:11.730011       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-01T09:22:11Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1101 09:22:11.933470       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1101 09:22:11.933523       1 controller.go:381] "Waiting for informer caches to sync"
	I1101 09:22:11.933539       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1101 09:22:12.028152       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1101 09:22:12.234339       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1101 09:22:12.234363       1 metrics.go:72] Registering metrics
	I1101 09:22:12.234412       1 controller.go:711] "Syncing nftables rules"
	
	
	==> kube-apiserver [373a67149dd379dbe02a8dd2c5dd1346feb196b4cd96a4a446a405b296b37f88] <==
	I1101 09:22:10.414602       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1101 09:22:10.419571       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1101 09:22:10.428432       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1101 09:22:10.428494       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1101 09:22:10.428505       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1101 09:22:10.428537       1 aggregator.go:171] initial CRD sync complete...
	I1101 09:22:10.428551       1 autoregister_controller.go:144] Starting autoregister controller
	I1101 09:22:10.428559       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1101 09:22:10.428564       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1101 09:22:10.428567       1 cache.go:39] Caches are synced for autoregister controller
	E1101 09:22:10.444977       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1101 09:22:10.451596       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1101 09:22:10.461746       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1101 09:22:10.844154       1 controller.go:667] quota admission added evaluator for: namespaces
	I1101 09:22:10.881292       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1101 09:22:10.907889       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1101 09:22:10.921428       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1101 09:22:10.930132       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1101 09:22:10.985028       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.101.104.26"}
	I1101 09:22:11.003829       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.110.29.105"}
	I1101 09:22:11.322019       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1101 09:22:13.394929       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1101 09:22:13.442901       1 controller.go:667] quota admission added evaluator for: endpoints
	I1101 09:22:13.544018       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1101 09:22:13.544017       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [70c53a24cc729a3d1a8a2c6693f407f6cbfc3bef1804693e6b09d3b79a7a245a] <==
	I1101 09:22:13.017709       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1101 09:22:13.022014       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1101 09:22:13.022080       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1101 09:22:13.022112       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1101 09:22:13.022122       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1101 09:22:13.022127       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1101 09:22:13.029318       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1101 09:22:13.033665       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1101 09:22:13.040457       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1101 09:22:13.040491       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1101 09:22:13.040512       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1101 09:22:13.040551       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1101 09:22:13.041742       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1101 09:22:13.041784       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1101 09:22:13.041848       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1101 09:22:13.041892       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1101 09:22:13.042012       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="newest-cni-340756"
	I1101 09:22:13.042070       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1101 09:22:13.045495       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1101 09:22:13.045762       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1101 09:22:13.047970       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1101 09:22:13.049110       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1101 09:22:13.056304       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1101 09:22:13.064533       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1101 09:22:13.069025       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [bc38b97b18bb036461b1e9e26d9368291053fc9a73c345e3d2f5c589e50b3cf9] <==
	I1101 09:22:11.516856       1 server_linux.go:53] "Using iptables proxy"
	I1101 09:22:11.577505       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1101 09:22:11.678155       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1101 09:22:11.678208       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.94.2"]
	E1101 09:22:11.678307       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1101 09:22:11.699353       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1101 09:22:11.699443       1 server_linux.go:132] "Using iptables Proxier"
	I1101 09:22:11.709604       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1101 09:22:11.714282       1 server.go:527] "Version info" version="v1.34.1"
	I1101 09:22:11.714321       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1101 09:22:11.719282       1 config.go:200] "Starting service config controller"
	I1101 09:22:11.719307       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1101 09:22:11.719333       1 config.go:106] "Starting endpoint slice config controller"
	I1101 09:22:11.719338       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1101 09:22:11.719354       1 config.go:403] "Starting serviceCIDR config controller"
	I1101 09:22:11.719360       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1101 09:22:11.720212       1 config.go:309] "Starting node config controller"
	I1101 09:22:11.720248       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1101 09:22:11.821742       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1101 09:22:11.821793       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1101 09:22:11.821830       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1101 09:22:11.821917       1 shared_informer.go:356] "Caches are synced" controller="node config"
	
	
	==> kube-scheduler [5e4575672f000ae294315e642c93643f4cc9fc2335f9213649ae518477bdd2f6] <==
	I1101 09:22:09.029728       1 serving.go:386] Generated self-signed cert in-memory
	W1101 09:22:10.335239       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1101 09:22:10.335286       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1101 09:22:10.335308       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1101 09:22:10.335319       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1101 09:22:10.364549       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1101 09:22:10.370892       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1101 09:22:10.374445       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1101 09:22:10.374483       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1101 09:22:10.375529       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1101 09:22:10.375598       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1101 09:22:10.387889       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found]" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	I1101 09:22:10.475225       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 01 09:22:10 newest-cni-340756 kubelet[649]: E1101 09:22:10.148523     649 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"newest-cni-340756\" not found" node="newest-cni-340756"
	Nov 01 09:22:10 newest-cni-340756 kubelet[649]: E1101 09:22:10.149558     649 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"newest-cni-340756\" not found" node="newest-cni-340756"
	Nov 01 09:22:10 newest-cni-340756 kubelet[649]: I1101 09:22:10.395487     649 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-newest-cni-340756"
	Nov 01 09:22:10 newest-cni-340756 kubelet[649]: I1101 09:22:10.472553     649 kubelet_node_status.go:124] "Node was previously registered" node="newest-cni-340756"
	Nov 01 09:22:10 newest-cni-340756 kubelet[649]: I1101 09:22:10.472669     649 kubelet_node_status.go:78] "Successfully registered node" node="newest-cni-340756"
	Nov 01 09:22:10 newest-cni-340756 kubelet[649]: I1101 09:22:10.472707     649 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.42.0.0/24"
	Nov 01 09:22:10 newest-cni-340756 kubelet[649]: I1101 09:22:10.473848     649 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.42.0.0/24"
	Nov 01 09:22:10 newest-cni-340756 kubelet[649]: E1101 09:22:10.513424     649 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-newest-cni-340756\" already exists" pod="kube-system/kube-apiserver-newest-cni-340756"
	Nov 01 09:22:10 newest-cni-340756 kubelet[649]: I1101 09:22:10.513470     649 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-newest-cni-340756"
	Nov 01 09:22:10 newest-cni-340756 kubelet[649]: E1101 09:22:10.526622     649 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-newest-cni-340756\" already exists" pod="kube-system/kube-controller-manager-newest-cni-340756"
	Nov 01 09:22:10 newest-cni-340756 kubelet[649]: I1101 09:22:10.526683     649 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-newest-cni-340756"
	Nov 01 09:22:10 newest-cni-340756 kubelet[649]: E1101 09:22:10.536602     649 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-newest-cni-340756\" already exists" pod="kube-system/kube-scheduler-newest-cni-340756"
	Nov 01 09:22:10 newest-cni-340756 kubelet[649]: I1101 09:22:10.536681     649 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/etcd-newest-cni-340756"
	Nov 01 09:22:10 newest-cni-340756 kubelet[649]: E1101 09:22:10.549875     649 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"etcd-newest-cni-340756\" already exists" pod="kube-system/etcd-newest-cni-340756"
	Nov 01 09:22:11 newest-cni-340756 kubelet[649]: I1101 09:22:11.088404     649 apiserver.go:52] "Watching apiserver"
	Nov 01 09:22:11 newest-cni-340756 kubelet[649]: I1101 09:22:11.094240     649 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Nov 01 09:22:11 newest-cni-340756 kubelet[649]: I1101 09:22:11.164039     649 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e6a908ac-4dfb-4f1c-8059-79695562a817-xtables-lock\") pod \"kube-proxy-wp2h9\" (UID: \"e6a908ac-4dfb-4f1c-8059-79695562a817\") " pod="kube-system/kube-proxy-wp2h9"
	Nov 01 09:22:11 newest-cni-340756 kubelet[649]: I1101 09:22:11.164087     649 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/9c4e4a33-eff1-47ec-94bc-7f9196c547ff-cni-cfg\") pod \"kindnet-gjnst\" (UID: \"9c4e4a33-eff1-47ec-94bc-7f9196c547ff\") " pod="kube-system/kindnet-gjnst"
	Nov 01 09:22:11 newest-cni-340756 kubelet[649]: I1101 09:22:11.164115     649 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9c4e4a33-eff1-47ec-94bc-7f9196c547ff-xtables-lock\") pod \"kindnet-gjnst\" (UID: \"9c4e4a33-eff1-47ec-94bc-7f9196c547ff\") " pod="kube-system/kindnet-gjnst"
	Nov 01 09:22:11 newest-cni-340756 kubelet[649]: I1101 09:22:11.164139     649 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9c4e4a33-eff1-47ec-94bc-7f9196c547ff-lib-modules\") pod \"kindnet-gjnst\" (UID: \"9c4e4a33-eff1-47ec-94bc-7f9196c547ff\") " pod="kube-system/kindnet-gjnst"
	Nov 01 09:22:11 newest-cni-340756 kubelet[649]: I1101 09:22:11.164200     649 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e6a908ac-4dfb-4f1c-8059-79695562a817-lib-modules\") pod \"kube-proxy-wp2h9\" (UID: \"e6a908ac-4dfb-4f1c-8059-79695562a817\") " pod="kube-system/kube-proxy-wp2h9"
	Nov 01 09:22:12 newest-cni-340756 kubelet[649]: I1101 09:22:12.826502     649 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	Nov 01 09:22:12 newest-cni-340756 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 01 09:22:12 newest-cni-340756 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 01 09:22:12 newest-cni-340756 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-340756 -n newest-cni-340756
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-340756 -n newest-cni-340756: exit status 2 (361.365105ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context newest-cni-340756 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: coredns-66bc5c9577-tmnp2 storage-provisioner dashboard-metrics-scraper-6ffb444bf9-gcggq kubernetes-dashboard-855c9754f9-jqdvd
helpers_test.go:282: ======> post-mortem[TestStartStop/group/newest-cni/serial/Pause]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context newest-cni-340756 describe pod coredns-66bc5c9577-tmnp2 storage-provisioner dashboard-metrics-scraper-6ffb444bf9-gcggq kubernetes-dashboard-855c9754f9-jqdvd
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context newest-cni-340756 describe pod coredns-66bc5c9577-tmnp2 storage-provisioner dashboard-metrics-scraper-6ffb444bf9-gcggq kubernetes-dashboard-855c9754f9-jqdvd: exit status 1 (67.880678ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "coredns-66bc5c9577-tmnp2" not found
	Error from server (NotFound): pods "storage-provisioner" not found
	Error from server (NotFound): pods "dashboard-metrics-scraper-6ffb444bf9-gcggq" not found
	Error from server (NotFound): pods "kubernetes-dashboard-855c9754f9-jqdvd" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context newest-cni-340756 describe pod coredns-66bc5c9577-tmnp2 storage-provisioner dashboard-metrics-scraper-6ffb444bf9-gcggq kubernetes-dashboard-855c9754f9-jqdvd: exit status 1
--- FAIL: TestStartStop/group/newest-cni/serial/Pause (7.22s)
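For reference, the post-mortem sequence above can be re-run by hand against the same cluster; a minimal sketch, assuming the newest-cni-340756 profile still exists and that out/minikube-linux-amd64 and kubectl are on PATH (commands copied from the log above):

	# component status as reported by minikube (returned exit status 2 above)
	out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-340756 -n newest-cni-340756
	# list pods not in the Running phase (coredns, storage-provisioner and the dashboard pods above)
	kubectl --context newest-cni-340756 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
	# describe them; in this run they had already been removed, hence the NotFound errors
	kubectl --context newest-cni-340756 describe pod coredns-66bc5c9577-tmnp2 storage-provisioner dashboard-metrics-scraper-6ffb444bf9-gcggq kubernetes-dashboard-855c9754f9-jqdvd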

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Pause (7.91s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-648641 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p default-k8s-diff-port-648641 --alsologtostderr -v=1: exit status 80 (1.739410778s)

                                                
                                                
-- stdout --
	* Pausing node default-k8s-diff-port-648641 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1101 09:23:24.043434  300613 out.go:360] Setting OutFile to fd 1 ...
	I1101 09:23:24.043607  300613 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:23:24.043624  300613 out.go:374] Setting ErrFile to fd 2...
	I1101 09:23:24.043630  300613 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:23:24.043985  300613 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21835-5913/.minikube/bin
	I1101 09:23:24.044366  300613 out.go:368] Setting JSON to false
	I1101 09:23:24.044433  300613 mustload.go:66] Loading cluster: default-k8s-diff-port-648641
	I1101 09:23:24.045004  300613 config.go:182] Loaded profile config "default-k8s-diff-port-648641": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:23:24.045611  300613 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-648641 --format={{.State.Status}}
	I1101 09:23:24.071092  300613 host.go:66] Checking if "default-k8s-diff-port-648641" exists ...
	I1101 09:23:24.071459  300613 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 09:23:24.148539  300613 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:73 OomKillDisable:false NGoroutines:96 SystemTime:2025-11-01 09:23:24.134296041 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1101 09:23:24.149322  300613 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-
cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21800/minikube-v1.37.0-1761658712-21800-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1761658712-21800/minikube-v1.37.0-1761658712-21800-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1761658712-21800-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qe
mu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:default-k8s-diff-port-648641 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s
(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1101 09:23:24.151396  300613 out.go:179] * Pausing node default-k8s-diff-port-648641 ... 
	I1101 09:23:24.152703  300613 host.go:66] Checking if "default-k8s-diff-port-648641" exists ...
	I1101 09:23:24.153070  300613 ssh_runner.go:195] Run: systemctl --version
	I1101 09:23:24.153129  300613 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-648641
	I1101 09:23:24.174922  300613 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/21835-5913/.minikube/machines/default-k8s-diff-port-648641/id_rsa Username:docker}
	I1101 09:23:24.282131  300613 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 09:23:24.312933  300613 pause.go:52] kubelet running: true
	I1101 09:23:24.313014  300613 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1101 09:23:24.539431  300613 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1101 09:23:24.539513  300613 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1101 09:23:24.621363  300613 cri.go:89] found id: "681c90b1a234a7b0e12da1b109f0109f2d356035b96075b92f31b7a62e17be33"
	I1101 09:23:24.621389  300613 cri.go:89] found id: "3b82436ea2d5700e9c28432df7c0c8995bb11039b6647b93defe4dff2a8dee15"
	I1101 09:23:24.621395  300613 cri.go:89] found id: "21cef8656a25dce4efb27f988e8a3cc9dce09db0fc84534eef074135f376089e"
	I1101 09:23:24.621400  300613 cri.go:89] found id: "283fa2a3fb085218f6af5b72a8cb10747dc821167ca6211f52463b3ce9a3d074"
	I1101 09:23:24.621404  300613 cri.go:89] found id: "bcf6725cf8f1856ef02c34b30c6b1276953d6f898448506f89f239326f5a432f"
	I1101 09:23:24.621409  300613 cri.go:89] found id: "9f9ab169c19541b266c8fd479fd93eb23c6c19f093c940d8f847e81ae5f10c2a"
	I1101 09:23:24.621413  300613 cri.go:89] found id: "3a70839c32b1cecca0edb91a2d14108dcb786cb224d520bf5ad312290fa6eb4d"
	I1101 09:23:24.621417  300613 cri.go:89] found id: "d2082d0328a772edb9696b1f0f12f5a201f1f4a3026e4d11e8ca74c484edb87e"
	I1101 09:23:24.621422  300613 cri.go:89] found id: "d66f5b99588e058579c26aebe3a8b228526e364d4d10824def10bbd5d58fe3b1"
	I1101 09:23:24.621433  300613 cri.go:89] found id: "472b8c61b8d9de3cfc6073493b5e39efb4fa16b1d084894a85d78ac191365539"
	I1101 09:23:24.621441  300613 cri.go:89] found id: "fddb3c973428f71ea46a6abdb4fa01b2d9bf2ce8c6c1755b890ee66f7e28e5d6"
	I1101 09:23:24.621445  300613 cri.go:89] found id: ""
	I1101 09:23:24.621494  300613 ssh_runner.go:195] Run: sudo runc list -f json
	I1101 09:23:24.635352  300613 retry.go:31] will retry after 181.192626ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T09:23:24Z" level=error msg="open /run/runc: no such file or directory"
	I1101 09:23:24.816715  300613 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 09:23:24.831975  300613 pause.go:52] kubelet running: false
	I1101 09:23:24.832032  300613 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1101 09:23:25.004546  300613 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1101 09:23:25.004663  300613 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1101 09:23:25.101998  300613 cri.go:89] found id: "681c90b1a234a7b0e12da1b109f0109f2d356035b96075b92f31b7a62e17be33"
	I1101 09:23:25.102028  300613 cri.go:89] found id: "3b82436ea2d5700e9c28432df7c0c8995bb11039b6647b93defe4dff2a8dee15"
	I1101 09:23:25.102033  300613 cri.go:89] found id: "21cef8656a25dce4efb27f988e8a3cc9dce09db0fc84534eef074135f376089e"
	I1101 09:23:25.102037  300613 cri.go:89] found id: "283fa2a3fb085218f6af5b72a8cb10747dc821167ca6211f52463b3ce9a3d074"
	I1101 09:23:25.102042  300613 cri.go:89] found id: "bcf6725cf8f1856ef02c34b30c6b1276953d6f898448506f89f239326f5a432f"
	I1101 09:23:25.102048  300613 cri.go:89] found id: "9f9ab169c19541b266c8fd479fd93eb23c6c19f093c940d8f847e81ae5f10c2a"
	I1101 09:23:25.102052  300613 cri.go:89] found id: "3a70839c32b1cecca0edb91a2d14108dcb786cb224d520bf5ad312290fa6eb4d"
	I1101 09:23:25.102055  300613 cri.go:89] found id: "d2082d0328a772edb9696b1f0f12f5a201f1f4a3026e4d11e8ca74c484edb87e"
	I1101 09:23:25.102059  300613 cri.go:89] found id: "d66f5b99588e058579c26aebe3a8b228526e364d4d10824def10bbd5d58fe3b1"
	I1101 09:23:25.102076  300613 cri.go:89] found id: "472b8c61b8d9de3cfc6073493b5e39efb4fa16b1d084894a85d78ac191365539"
	I1101 09:23:25.102080  300613 cri.go:89] found id: "fddb3c973428f71ea46a6abdb4fa01b2d9bf2ce8c6c1755b890ee66f7e28e5d6"
	I1101 09:23:25.102084  300613 cri.go:89] found id: ""
	I1101 09:23:25.102154  300613 ssh_runner.go:195] Run: sudo runc list -f json
	I1101 09:23:25.116525  300613 retry.go:31] will retry after 261.100249ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T09:23:25Z" level=error msg="open /run/runc: no such file or directory"
	I1101 09:23:25.377777  300613 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 09:23:25.394024  300613 pause.go:52] kubelet running: false
	I1101 09:23:25.394088  300613 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1101 09:23:25.581564  300613 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1101 09:23:25.581676  300613 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1101 09:23:25.666235  300613 cri.go:89] found id: "681c90b1a234a7b0e12da1b109f0109f2d356035b96075b92f31b7a62e17be33"
	I1101 09:23:25.666267  300613 cri.go:89] found id: "3b82436ea2d5700e9c28432df7c0c8995bb11039b6647b93defe4dff2a8dee15"
	I1101 09:23:25.666274  300613 cri.go:89] found id: "21cef8656a25dce4efb27f988e8a3cc9dce09db0fc84534eef074135f376089e"
	I1101 09:23:25.666280  300613 cri.go:89] found id: "283fa2a3fb085218f6af5b72a8cb10747dc821167ca6211f52463b3ce9a3d074"
	I1101 09:23:25.666284  300613 cri.go:89] found id: "bcf6725cf8f1856ef02c34b30c6b1276953d6f898448506f89f239326f5a432f"
	I1101 09:23:25.666289  300613 cri.go:89] found id: "9f9ab169c19541b266c8fd479fd93eb23c6c19f093c940d8f847e81ae5f10c2a"
	I1101 09:23:25.666293  300613 cri.go:89] found id: "3a70839c32b1cecca0edb91a2d14108dcb786cb224d520bf5ad312290fa6eb4d"
	I1101 09:23:25.666298  300613 cri.go:89] found id: "d2082d0328a772edb9696b1f0f12f5a201f1f4a3026e4d11e8ca74c484edb87e"
	I1101 09:23:25.666302  300613 cri.go:89] found id: "d66f5b99588e058579c26aebe3a8b228526e364d4d10824def10bbd5d58fe3b1"
	I1101 09:23:25.666313  300613 cri.go:89] found id: "472b8c61b8d9de3cfc6073493b5e39efb4fa16b1d084894a85d78ac191365539"
	I1101 09:23:25.666316  300613 cri.go:89] found id: "fddb3c973428f71ea46a6abdb4fa01b2d9bf2ce8c6c1755b890ee66f7e28e5d6"
	I1101 09:23:25.666319  300613 cri.go:89] found id: ""
	I1101 09:23:25.666366  300613 ssh_runner.go:195] Run: sudo runc list -f json
	I1101 09:23:25.683750  300613 out.go:203] 
	W1101 09:23:25.685034  300613 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T09:23:25Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T09:23:25Z" level=error msg="open /run/runc: no such file or directory"
	
	W1101 09:23:25.685060  300613 out.go:285] * 
	* 
	W1101 09:23:25.691603  300613 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1101 09:23:25.693053  300613 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-amd64 pause -p default-k8s-diff-port-648641 --alsologtostderr -v=1 failed: exit status 80
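The stderr above shows where the pause actually breaks: kubelet is found running and is disabled, crictl keeps returning the eleven expected container IDs from the kube-system and kubernetes-dashboard namespaces, but every `sudo runc list -f json` probe exits 1 with `open /run/runc: no such file or directory`, so after the retries minikube aborts with GUEST_PAUSE. A small diagnosis sketch follows, assuming `minikube ssh` can still reach the node and that the CRI-O configuration sits at the conventional /etc/crio/crio.conf and crio.conf.d locations (neither is confirmed by this log):

	# re-run the exact call that the pause path retries
	out/minikube-linux-amd64 ssh -p default-k8s-diff-port-648641 sudo runc list -f json
	# the same containers are still visible through the CRI
	out/minikube-linux-amd64 ssh -p default-k8s-diff-port-648641 sudo crictl ps --quiet
	# check which state directory the runtime is actually configured to use
	out/minikube-linux-amd64 ssh -p default-k8s-diff-port-648641 sudo grep -riE 'runroot|runtime_root' /etc/crio/crio.conf /etc/crio/crio.conf.d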
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect default-k8s-diff-port-648641
helpers_test.go:243: (dbg) docker inspect default-k8s-diff-port-648641:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "57e212cd292edfd327da04bf29f32b5b288abd48bc5ceb3af8d29be761bfdf53",
	        "Created": "2025-11-01T09:21:16.622802953Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 285307,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-01T09:22:23.318697336Z",
	            "FinishedAt": "2025-11-01T09:22:22.278074274Z"
	        },
	        "Image": "sha256:a1caeebaf98ed0136731e905a1e086f77985a42c2ebb5a7e0b3d0bd7fcbe10cc",
	        "ResolvConfPath": "/var/lib/docker/containers/57e212cd292edfd327da04bf29f32b5b288abd48bc5ceb3af8d29be761bfdf53/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/57e212cd292edfd327da04bf29f32b5b288abd48bc5ceb3af8d29be761bfdf53/hostname",
	        "HostsPath": "/var/lib/docker/containers/57e212cd292edfd327da04bf29f32b5b288abd48bc5ceb3af8d29be761bfdf53/hosts",
	        "LogPath": "/var/lib/docker/containers/57e212cd292edfd327da04bf29f32b5b288abd48bc5ceb3af8d29be761bfdf53/57e212cd292edfd327da04bf29f32b5b288abd48bc5ceb3af8d29be761bfdf53-json.log",
	        "Name": "/default-k8s-diff-port-648641",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-648641:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "default-k8s-diff-port-648641",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "57e212cd292edfd327da04bf29f32b5b288abd48bc5ceb3af8d29be761bfdf53",
	                "LowerDir": "/var/lib/docker/overlay2/5e7c7f3822b950cf98e6234ac809850a021b136b26905d554019d5f32326262b-init/diff:/var/lib/docker/overlay2/95b4043403bc6c2b0722fad0bb31817195e1e004282cc78b772e7c8c9f9def2d/diff",
	                "MergedDir": "/var/lib/docker/overlay2/5e7c7f3822b950cf98e6234ac809850a021b136b26905d554019d5f32326262b/merged",
	                "UpperDir": "/var/lib/docker/overlay2/5e7c7f3822b950cf98e6234ac809850a021b136b26905d554019d5f32326262b/diff",
	                "WorkDir": "/var/lib/docker/overlay2/5e7c7f3822b950cf98e6234ac809850a021b136b26905d554019d5f32326262b/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-648641",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-648641/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-648641",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-648641",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-648641",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "7db4b9865fef70ae9eb390a6d220ee8e7de85a499a09d95a53fe7dbb71eea749",
	            "SandboxKey": "/var/run/docker/netns/7db4b9865fef",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33108"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33109"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33112"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33110"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33111"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "default-k8s-diff-port-648641": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "8a:ea:7a:48:3d:92",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "7a970666a21b0480d187f349d9b6ff5e5ba4999bec31b90faf658b9146692b6b",
	                    "EndpointID": "ddf17ccacf93b5c21ed241240a6f5f39df837cc9507d0b97c56abe5dadb81882",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-648641",
	                        "57e212cd292e"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
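The 33108 host port that the pause command's SSH client used earlier comes directly from the NetworkSettings block above; the same Go template minikube ran can be reused by hand to confirm the mapping while the container is up, for example:

	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' default-k8s-diff-port-648641
	# prints 33108 for the port bindings shown above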
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-648641 -n default-k8s-diff-port-648641
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-648641 -n default-k8s-diff-port-648641: exit status 2 (385.992005ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
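Host reads "Running" because the docker container itself is still up; the exit status 2 most likely reflects the kubelet that the failed pause attempt had already disabled (the stderr above shows kubelet running: true on the first probe and false afterwards). The per-component breakdown can be seen with the plain status command instead of the --format={{.Host}} template, for example:

	out/minikube-linux-amd64 status -p default-k8s-diff-port-648641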
helpers_test.go:252: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-648641 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-648641 logs -n 25: (3.151661351s)
helpers_test.go:260: TestStartStop/group/default-k8s-diff-port/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                ARGS                                                                                │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p kindnet-204434 sudo crictl ps --all                                                                                                                             │ kindnet-204434               │ jenkins │ v1.37.0 │ 01 Nov 25 09:23 UTC │ 01 Nov 25 09:23 UTC │
	│ ssh     │ -p auto-204434 sudo systemctl cat crio --no-pager                                                                                                                  │ auto-204434                  │ jenkins │ v1.37.0 │ 01 Nov 25 09:23 UTC │ 01 Nov 25 09:23 UTC │
	│ ssh     │ -p auto-204434 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                        │ auto-204434                  │ jenkins │ v1.37.0 │ 01 Nov 25 09:23 UTC │ 01 Nov 25 09:23 UTC │
	│ ssh     │ -p auto-204434 sudo crio config                                                                                                                                    │ auto-204434                  │ jenkins │ v1.37.0 │ 01 Nov 25 09:23 UTC │ 01 Nov 25 09:23 UTC │
	│ ssh     │ -p calico-204434 pgrep -a kubelet                                                                                                                                  │ calico-204434                │ jenkins │ v1.37.0 │ 01 Nov 25 09:23 UTC │ 01 Nov 25 09:23 UTC │
	│ ssh     │ -p kindnet-204434 sudo find /etc/cni -type f -exec sh -c 'echo {}; cat {}' \;                                                                                      │ kindnet-204434               │ jenkins │ v1.37.0 │ 01 Nov 25 09:23 UTC │ 01 Nov 25 09:23 UTC │
	│ delete  │ -p auto-204434                                                                                                                                                     │ auto-204434                  │ jenkins │ v1.37.0 │ 01 Nov 25 09:23 UTC │ 01 Nov 25 09:23 UTC │
	│ ssh     │ -p kindnet-204434 sudo ip a s                                                                                                                                      │ kindnet-204434               │ jenkins │ v1.37.0 │ 01 Nov 25 09:23 UTC │ 01 Nov 25 09:23 UTC │
	│ ssh     │ -p kindnet-204434 sudo ip r s                                                                                                                                      │ kindnet-204434               │ jenkins │ v1.37.0 │ 01 Nov 25 09:23 UTC │ 01 Nov 25 09:23 UTC │
	│ ssh     │ -p kindnet-204434 sudo iptables-save                                                                                                                               │ kindnet-204434               │ jenkins │ v1.37.0 │ 01 Nov 25 09:23 UTC │ 01 Nov 25 09:23 UTC │
	│ ssh     │ -p kindnet-204434 sudo iptables -t nat -L -n -v                                                                                                                    │ kindnet-204434               │ jenkins │ v1.37.0 │ 01 Nov 25 09:23 UTC │ 01 Nov 25 09:23 UTC │
	│ ssh     │ -p kindnet-204434 sudo systemctl status kubelet --all --full --no-pager                                                                                            │ kindnet-204434               │ jenkins │ v1.37.0 │ 01 Nov 25 09:23 UTC │ 01 Nov 25 09:23 UTC │
	│ ssh     │ -p kindnet-204434 sudo systemctl cat kubelet --no-pager                                                                                                            │ kindnet-204434               │ jenkins │ v1.37.0 │ 01 Nov 25 09:23 UTC │ 01 Nov 25 09:23 UTC │
	│ ssh     │ -p kindnet-204434 sudo journalctl -xeu kubelet --all --full --no-pager                                                                                             │ kindnet-204434               │ jenkins │ v1.37.0 │ 01 Nov 25 09:23 UTC │ 01 Nov 25 09:23 UTC │
	│ ssh     │ -p kindnet-204434 sudo cat /etc/kubernetes/kubelet.conf                                                                                                            │ kindnet-204434               │ jenkins │ v1.37.0 │ 01 Nov 25 09:23 UTC │ 01 Nov 25 09:23 UTC │
	│ start   │ -p custom-flannel-204434 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio │ custom-flannel-204434        │ jenkins │ v1.37.0 │ 01 Nov 25 09:23 UTC │                     │
	│ ssh     │ -p kindnet-204434 sudo cat /var/lib/kubelet/config.yaml                                                                                                            │ kindnet-204434               │ jenkins │ v1.37.0 │ 01 Nov 25 09:23 UTC │ 01 Nov 25 09:23 UTC │
	│ ssh     │ -p kindnet-204434 sudo systemctl status docker --all --full --no-pager                                                                                             │ kindnet-204434               │ jenkins │ v1.37.0 │ 01 Nov 25 09:23 UTC │                     │
	│ pause   │ -p default-k8s-diff-port-648641 --alsologtostderr -v=1                                                                                                             │ default-k8s-diff-port-648641 │ jenkins │ v1.37.0 │ 01 Nov 25 09:23 UTC │                     │
	│ ssh     │ -p kindnet-204434 sudo systemctl cat docker --no-pager                                                                                                             │ kindnet-204434               │ jenkins │ v1.37.0 │ 01 Nov 25 09:23 UTC │ 01 Nov 25 09:23 UTC │
	│ ssh     │ -p kindnet-204434 sudo cat /etc/docker/daemon.json                                                                                                                 │ kindnet-204434               │ jenkins │ v1.37.0 │ 01 Nov 25 09:23 UTC │                     │
	│ ssh     │ -p kindnet-204434 sudo docker system info                                                                                                                          │ kindnet-204434               │ jenkins │ v1.37.0 │ 01 Nov 25 09:23 UTC │                     │
	│ ssh     │ -p kindnet-204434 sudo systemctl status cri-docker --all --full --no-pager                                                                                         │ kindnet-204434               │ jenkins │ v1.37.0 │ 01 Nov 25 09:23 UTC │                     │
	│ ssh     │ -p kindnet-204434 sudo systemctl cat cri-docker --no-pager                                                                                                         │ kindnet-204434               │ jenkins │ v1.37.0 │ 01 Nov 25 09:23 UTC │ 01 Nov 25 09:23 UTC │
	│ ssh     │ -p kindnet-204434 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                                                    │ kindnet-204434               │ jenkins │ v1.37.0 │ 01 Nov 25 09:23 UTC │                     │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/01 09:23:23
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1101 09:23:23.170464  299985 out.go:360] Setting OutFile to fd 1 ...
	I1101 09:23:23.170727  299985 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:23:23.170736  299985 out.go:374] Setting ErrFile to fd 2...
	I1101 09:23:23.170749  299985 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:23:23.170959  299985 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21835-5913/.minikube/bin
	I1101 09:23:23.171449  299985 out.go:368] Setting JSON to false
	I1101 09:23:23.172708  299985 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":3951,"bootTime":1761985052,"procs":325,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1101 09:23:23.172804  299985 start.go:143] virtualization: kvm guest
	I1101 09:23:23.175257  299985 out.go:179] * [custom-flannel-204434] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1101 09:23:23.176784  299985 out.go:179]   - MINIKUBE_LOCATION=21835
	I1101 09:23:23.176811  299985 notify.go:221] Checking for updates...
	I1101 09:23:23.179401  299985 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1101 09:23:23.180598  299985 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21835-5913/kubeconfig
	I1101 09:23:23.181849  299985 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21835-5913/.minikube
	I1101 09:23:23.183171  299985 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1101 09:23:23.184556  299985 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1101 09:23:23.186532  299985 config.go:182] Loaded profile config "calico-204434": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:23:23.186652  299985 config.go:182] Loaded profile config "default-k8s-diff-port-648641": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:23:23.186733  299985 config.go:182] Loaded profile config "kindnet-204434": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:23:23.186848  299985 driver.go:422] Setting default libvirt URI to qemu:///system
	I1101 09:23:23.216607  299985 docker.go:124] docker version: linux-28.5.1:Docker Engine - Community
	I1101 09:23:23.216828  299985 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 09:23:23.280327  299985 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:false NGoroutines:75 SystemTime:2025-11-01 09:23:23.270216845 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1101 09:23:23.280425  299985 docker.go:319] overlay module found
	I1101 09:23:23.281947  299985 out.go:179] * Using the docker driver based on user configuration
	I1101 09:23:23.283212  299985 start.go:309] selected driver: docker
	I1101 09:23:23.283231  299985 start.go:930] validating driver "docker" against <nil>
	I1101 09:23:23.283249  299985 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1101 09:23:23.284041  299985 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 09:23:23.346736  299985 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:false NGoroutines:75 SystemTime:2025-11-01 09:23:23.335859764 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1101 09:23:23.346932  299985 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1101 09:23:23.347162  299985 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1101 09:23:23.348648  299985 out.go:179] * Using Docker driver with root privileges
	I1101 09:23:23.349810  299985 cni.go:84] Creating CNI manager for "testdata/kube-flannel.yaml"
	I1101 09:23:23.349852  299985 start_flags.go:336] Found "testdata/kube-flannel.yaml" CNI - setting NetworkPlugin=cni
	I1101 09:23:23.349960  299985 start.go:353] cluster config:
	{Name:custom-flannel-204434 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:custom-flannel-204434 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local
ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath:
StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 09:23:23.351331  299985 out.go:179] * Starting "custom-flannel-204434" primary control-plane node in "custom-flannel-204434" cluster
	I1101 09:23:23.352594  299985 cache.go:124] Beginning downloading kic base image for docker with crio
	I1101 09:23:23.354046  299985 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1101 09:23:23.355444  299985 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1101 09:23:23.355492  299985 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21835-5913/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1101 09:23:23.355525  299985 cache.go:59] Caching tarball of preloaded images
	I1101 09:23:23.355555  299985 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1101 09:23:23.355608  299985 preload.go:233] Found /home/jenkins/minikube-integration/21835-5913/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1101 09:23:23.355619  299985 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1101 09:23:23.355722  299985 profile.go:143] Saving config to /home/jenkins/minikube-integration/21835-5913/.minikube/profiles/custom-flannel-204434/config.json ...
	I1101 09:23:23.355760  299985 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21835-5913/.minikube/profiles/custom-flannel-204434/config.json: {Name:mk3a0c312f231d186d97d032c3fddf54e1f4873c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:23:23.378882  299985 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1101 09:23:23.378907  299985 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1101 09:23:23.378929  299985 cache.go:233] Successfully downloaded all kic artifacts
	I1101 09:23:23.378967  299985 start.go:360] acquireMachinesLock for custom-flannel-204434: {Name:mk3406c2c8247d868011a38e097229190d6440d7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1101 09:23:23.379093  299985 start.go:364] duration metric: took 105.911µs to acquireMachinesLock for "custom-flannel-204434"
	I1101 09:23:23.379125  299985 start.go:93] Provisioning new machine with config: &{Name:custom-flannel-204434 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:custom-flannel-204434 Namespace:default APIServerHAVIP: A
PIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false Disab
leCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1101 09:23:23.379243  299985 start.go:125] createHost starting for "" (driver="docker")
	
	
	==> CRI-O <==
	Nov 01 09:22:45 default-k8s-diff-port-648641 crio[562]: time="2025-11-01T09:22:45.802310455Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 01 09:22:45 default-k8s-diff-port-648641 crio[562]: time="2025-11-01T09:22:45.80618402Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 01 09:22:45 default-k8s-diff-port-648641 crio[562]: time="2025-11-01T09:22:45.806235095Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 01 09:22:59 default-k8s-diff-port-648641 crio[562]: time="2025-11-01T09:22:59.845805423Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=870a8415-ed52-4071-a701-07ccf2862148 name=/runtime.v1.ImageService/ImageStatus
	Nov 01 09:22:59 default-k8s-diff-port-648641 crio[562]: time="2025-11-01T09:22:59.849327546Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=fb536794-86fa-49fb-b836-9ad6fc9a9343 name=/runtime.v1.ImageService/ImageStatus
	Nov 01 09:22:59 default-k8s-diff-port-648641 crio[562]: time="2025-11-01T09:22:59.853506978Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-ls2kb/dashboard-metrics-scraper" id=f1b83110-e491-4231-ba15-842d1fda1cda name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 09:22:59 default-k8s-diff-port-648641 crio[562]: time="2025-11-01T09:22:59.853654021Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 09:22:59 default-k8s-diff-port-648641 crio[562]: time="2025-11-01T09:22:59.86219015Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 09:22:59 default-k8s-diff-port-648641 crio[562]: time="2025-11-01T09:22:59.862849022Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 09:22:59 default-k8s-diff-port-648641 crio[562]: time="2025-11-01T09:22:59.893346408Z" level=info msg="Created container 472b8c61b8d9de3cfc6073493b5e39efb4fa16b1d084894a85d78ac191365539: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-ls2kb/dashboard-metrics-scraper" id=f1b83110-e491-4231-ba15-842d1fda1cda name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 09:22:59 default-k8s-diff-port-648641 crio[562]: time="2025-11-01T09:22:59.89412124Z" level=info msg="Starting container: 472b8c61b8d9de3cfc6073493b5e39efb4fa16b1d084894a85d78ac191365539" id=00f16eea-6f65-415b-bb47-6fa60f4e254e name=/runtime.v1.RuntimeService/StartContainer
	Nov 01 09:22:59 default-k8s-diff-port-648641 crio[562]: time="2025-11-01T09:22:59.896461037Z" level=info msg="Started container" PID=1737 containerID=472b8c61b8d9de3cfc6073493b5e39efb4fa16b1d084894a85d78ac191365539 description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-ls2kb/dashboard-metrics-scraper id=00f16eea-6f65-415b-bb47-6fa60f4e254e name=/runtime.v1.RuntimeService/StartContainer sandboxID=7bd2190561825efec7fdb2fbc6cf09118d40c42b963943e5fbabafe89d89973d
	Nov 01 09:23:00 default-k8s-diff-port-648641 crio[562]: time="2025-11-01T09:23:00.058609294Z" level=info msg="Removing container: 9279f473364d230118c9edb825c69cdbdebc6a437e8e25b4ab6524c9a545664a" id=a6321766-eed8-4165-8cfe-954f5855a77a name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 01 09:23:00 default-k8s-diff-port-648641 crio[562]: time="2025-11-01T09:23:00.069196586Z" level=info msg="Removed container 9279f473364d230118c9edb825c69cdbdebc6a437e8e25b4ab6524c9a545664a: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-ls2kb/dashboard-metrics-scraper" id=a6321766-eed8-4165-8cfe-954f5855a77a name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 01 09:23:06 default-k8s-diff-port-648641 crio[562]: time="2025-11-01T09:23:06.079099058Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=c0d54bc8-4af4-476c-b1e8-1fc1008d5d1d name=/runtime.v1.ImageService/ImageStatus
	Nov 01 09:23:06 default-k8s-diff-port-648641 crio[562]: time="2025-11-01T09:23:06.080138388Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=cf24be3a-1aa3-481a-9727-5b676c9620f8 name=/runtime.v1.ImageService/ImageStatus
	Nov 01 09:23:06 default-k8s-diff-port-648641 crio[562]: time="2025-11-01T09:23:06.081183756Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=ac85eb27-0f30-40e4-9407-67948fea989c name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 09:23:06 default-k8s-diff-port-648641 crio[562]: time="2025-11-01T09:23:06.081322406Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 09:23:06 default-k8s-diff-port-648641 crio[562]: time="2025-11-01T09:23:06.086538677Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 09:23:06 default-k8s-diff-port-648641 crio[562]: time="2025-11-01T09:23:06.086771984Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/98cf15d41269a3241406cd5a079fd47100a96fc8c7e5ca9a023189a78766fda2/merged/etc/passwd: no such file or directory"
	Nov 01 09:23:06 default-k8s-diff-port-648641 crio[562]: time="2025-11-01T09:23:06.086811279Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/98cf15d41269a3241406cd5a079fd47100a96fc8c7e5ca9a023189a78766fda2/merged/etc/group: no such file or directory"
	Nov 01 09:23:06 default-k8s-diff-port-648641 crio[562]: time="2025-11-01T09:23:06.087139886Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 09:23:06 default-k8s-diff-port-648641 crio[562]: time="2025-11-01T09:23:06.115822803Z" level=info msg="Created container 681c90b1a234a7b0e12da1b109f0109f2d356035b96075b92f31b7a62e17be33: kube-system/storage-provisioner/storage-provisioner" id=ac85eb27-0f30-40e4-9407-67948fea989c name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 09:23:06 default-k8s-diff-port-648641 crio[562]: time="2025-11-01T09:23:06.116524746Z" level=info msg="Starting container: 681c90b1a234a7b0e12da1b109f0109f2d356035b96075b92f31b7a62e17be33" id=43dc7a29-772d-4dac-a2a2-8907b5fcedcd name=/runtime.v1.RuntimeService/StartContainer
	Nov 01 09:23:06 default-k8s-diff-port-648641 crio[562]: time="2025-11-01T09:23:06.118477475Z" level=info msg="Started container" PID=1754 containerID=681c90b1a234a7b0e12da1b109f0109f2d356035b96075b92f31b7a62e17be33 description=kube-system/storage-provisioner/storage-provisioner id=43dc7a29-772d-4dac-a2a2-8907b5fcedcd name=/runtime.v1.RuntimeService/StartContainer sandboxID=b59708d97fd405195daaff5c96767d9364604de27f8f89cb12fe1ad54903be69
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                                    NAMESPACE
	681c90b1a234a       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           21 seconds ago      Running             storage-provisioner         1                   b59708d97fd40       storage-provisioner                                    kube-system
	472b8c61b8d9d       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           27 seconds ago      Exited              dashboard-metrics-scraper   2                   7bd2190561825       dashboard-metrics-scraper-6ffb444bf9-ls2kb             kubernetes-dashboard
	fddb3c973428f       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   42 seconds ago      Running             kubernetes-dashboard        0                   f0ffc11885fb4       kubernetes-dashboard-855c9754f9-9lh8h                  kubernetes-dashboard
	482ef8962fbc4       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           51 seconds ago      Running             busybox                     1                   5132b9889d4bb       busybox                                                default
	3b82436ea2d57       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                           51 seconds ago      Running             coredns                     0                   256f44448f0b7       coredns-66bc5c9577-nwj2s                               kube-system
	21cef8656a25d       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           51 seconds ago      Exited              storage-provisioner         0                   b59708d97fd40       storage-provisioner                                    kube-system
	283fa2a3fb085       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           52 seconds ago      Running             kindnet-cni                 0                   6d60ae8969dfa       kindnet-fr9cg                                          kube-system
	bcf6725cf8f18       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                           52 seconds ago      Running             kube-proxy                  0                   c19b2d3e2ad83       kube-proxy-nwrt4                                       kube-system
	9f9ab169c1954       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                           55 seconds ago      Running             kube-apiserver              0                   b41aaa79837c7       kube-apiserver-default-k8s-diff-port-648641            kube-system
	3a70839c32b1c       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                           55 seconds ago      Running             kube-controller-manager     0                   852adc9a20e59       kube-controller-manager-default-k8s-diff-port-648641   kube-system
	d2082d0328a77       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                           55 seconds ago      Running             etcd                        0                   7aa2e730d7c31       etcd-default-k8s-diff-port-648641                      kube-system
	d66f5b99588e0       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                           55 seconds ago      Running             kube-scheduler              0                   889c943581712       kube-scheduler-default-k8s-diff-port-648641            kube-system
	
	
	==> coredns [3b82436ea2d5700e9c28432df7c0c8995bb11039b6647b93defe4dff2a8dee15] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 66f0a748f44f6317a6b122af3f457c9dd0ecaed8718ffbf95a69434523efd9ec4992e71f54c7edd5753646fe9af89ac2138b9c3ce14d4a0ba9d2372a55f120bb
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:52211 - 15184 "HINFO IN 1022244669784587857.603842359072382761. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.126462874s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-648641
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-648641
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=21e20c7776311c6e29254646bf2620ea610dd192
	                    minikube.k8s.io/name=default-k8s-diff-port-648641
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_01T09_21_37_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 01 Nov 2025 09:21:33 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-648641
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 01 Nov 2025 09:23:15 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 01 Nov 2025 09:23:14 +0000   Sat, 01 Nov 2025 09:21:31 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 01 Nov 2025 09:23:14 +0000   Sat, 01 Nov 2025 09:21:31 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 01 Nov 2025 09:23:14 +0000   Sat, 01 Nov 2025 09:21:31 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 01 Nov 2025 09:23:14 +0000   Sat, 01 Nov 2025 09:21:53 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.103.2
	  Hostname:    default-k8s-diff-port-648641
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863348Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863348Ki
	  pods:               110
	System Info:
	  Machine ID:                 98aac72b9abe9f06f1b9b38568f5cc96
	  System UUID:                62320ade-1784-4153-9303-00914bb09bcc
	  Boot ID:                    87f40279-f2b1-40f3-beea-00aea5942dd1
	  Kernel Version:             6.8.0-1043-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         92s
	  kube-system                 coredns-66bc5c9577-nwj2s                                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     106s
	  kube-system                 etcd-default-k8s-diff-port-648641                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         111s
	  kube-system                 kindnet-fr9cg                                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      106s
	  kube-system                 kube-apiserver-default-k8s-diff-port-648641             250m (3%)     0 (0%)      0 (0%)           0 (0%)         111s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-648641    200m (2%)     0 (0%)      0 (0%)           0 (0%)         111s
	  kube-system                 kube-proxy-nwrt4                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         106s
	  kube-system                 kube-scheduler-default-k8s-diff-port-648641             100m (1%)     0 (0%)      0 (0%)           0 (0%)         112s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         105s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-ls2kb              0 (0%)        0 (0%)      0 (0%)           0 (0%)         50s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-9lh8h                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         50s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 105s               kube-proxy       
	  Normal  Starting                 52s                kube-proxy       
	  Normal  NodeHasSufficientMemory  111s               kubelet          Node default-k8s-diff-port-648641 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    111s               kubelet          Node default-k8s-diff-port-648641 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     111s               kubelet          Node default-k8s-diff-port-648641 status is now: NodeHasSufficientPID
	  Normal  Starting                 111s               kubelet          Starting kubelet.
	  Normal  RegisteredNode           107s               node-controller  Node default-k8s-diff-port-648641 event: Registered Node default-k8s-diff-port-648641 in Controller
	  Normal  NodeReady                94s                kubelet          Node default-k8s-diff-port-648641 status is now: NodeReady
	  Normal  Starting                 57s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  57s (x8 over 57s)  kubelet          Node default-k8s-diff-port-648641 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    57s (x8 over 57s)  kubelet          Node default-k8s-diff-port-648641 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     57s (x8 over 57s)  kubelet          Node default-k8s-diff-port-648641 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           50s                node-controller  Node default-k8s-diff-port-648641 event: Registered Node default-k8s-diff-port-648641 in Controller
	
	
	==> dmesg <==
	[  +0.000025] ll header: 00000000: 0a 94 8e 4a 1a 1e 72 e7 eb 50 9b c7 08 00
	[  +2.047726] IPv4: martian source 10.105.176.244 from 192.168.49.2, on dev eth0
	[  +0.000008] ll header: 00000000: 0a 94 8e 4a 1a 1e 72 e7 eb 50 9b c7 08 00
	[Nov 1 08:38] IPv4: martian source 10.105.176.244 from 192.168.49.2, on dev eth0
	[  +0.000025] ll header: 00000000: 0a 94 8e 4a 1a 1e 72 e7 eb 50 9b c7 08 00
	[  +2.215674] IPv4: martian source 10.105.176.244 from 192.168.49.2, on dev eth0
	[  +0.000007] ll header: 00000000: 0a 94 8e 4a 1a 1e 72 e7 eb 50 9b c7 08 00
	[  +1.047939] IPv4: martian source 10.244.0.3 from 192.168.49.2, on dev eth0
	[  +0.000023] ll header: 00000000: 0a 94 8e 4a 1a 1e 72 e7 eb 50 9b c7 08 00
	[  +1.023915] IPv4: martian source 10.244.0.3 from 192.168.49.2, on dev eth0
	[  +0.000007] ll header: 00000000: 0a 94 8e 4a 1a 1e 72 e7 eb 50 9b c7 08 00
	[  +1.023851] IPv4: martian source 10.244.0.3 from 192.168.49.2, on dev eth0
	[  +0.000021] ll header: 00000000: 0a 94 8e 4a 1a 1e 72 e7 eb 50 9b c7 08 00
	[  +1.023867] IPv4: martian source 10.244.0.3 from 192.168.49.2, on dev eth0
	[  +0.000026] ll header: 00000000: 0a 94 8e 4a 1a 1e 72 e7 eb 50 9b c7 08 00
	[  +1.023880] IPv4: martian source 10.244.0.3 from 192.168.49.2, on dev eth0
	[  +0.000033] ll header: 00000000: 0a 94 8e 4a 1a 1e 72 e7 eb 50 9b c7 08 00
	[  +1.023884] IPv4: martian source 10.244.0.3 from 192.168.49.2, on dev eth0
	[  +0.000029] ll header: 00000000: 0a 94 8e 4a 1a 1e 72 e7 eb 50 9b c7 08 00
	[  +1.023851] IPv4: martian source 10.244.0.3 from 192.168.49.2, on dev eth0
	[  +0.000028] ll header: 00000000: 0a 94 8e 4a 1a 1e 72 e7 eb 50 9b c7 08 00
	[  +4.031524] IPv4: martian source 10.244.0.3 from 192.168.49.2, on dev eth0
	[  +0.000026] ll header: 00000000: 0a 94 8e 4a 1a 1e 72 e7 eb 50 9b c7 08 00
	[  +8.255123] IPv4: martian source 10.244.0.3 from 192.168.49.2, on dev eth0
	[  +0.000022] ll header: 00000000: 0a 94 8e 4a 1a 1e 72 e7 eb 50 9b c7 08 00
	
	
	==> etcd [d2082d0328a772edb9696b1f0f12f5a201f1f4a3026e4d11e8ca74c484edb87e] <==
	{"level":"warn","ts":"2025-11-01T09:22:33.001848Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35596","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:22:33.011699Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35636","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:22:33.043751Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35700","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:22:33.057542Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35750","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:22:33.066272Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35776","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:22:33.077490Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35804","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:22:33.086743Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35838","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:22:33.097168Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35882","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:22:33.106124Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35928","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:22:33.116215Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35980","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:22:33.131696Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36036","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:22:33.144804Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36092","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:22:33.154715Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36116","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:22:33.165387Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36154","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:22:33.179510Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36184","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:22:33.189223Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36218","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:22:33.198775Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36220","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:22:33.217532Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36254","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:22:33.230250Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36274","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:22:33.236093Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36290","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:22:33.244996Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36334","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:22:33.258906Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36360","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:22:33.267965Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36374","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:22:33.281711Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36398","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:22:33.353233Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36402","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 09:23:28 up  1:05,  0 user,  load average: 4.76, 3.60, 2.10
	Linux default-k8s-diff-port-648641 6.8.0-1043-gcp #46~22.04.1-Ubuntu SMP Wed Oct 22 19:00:03 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [283fa2a3fb085218f6af5b72a8cb10747dc821167ca6211f52463b3ce9a3d074] <==
	I1101 09:22:35.486247       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1101 09:22:35.486673       1 main.go:139] hostIP = 192.168.103.2
	podIP = 192.168.103.2
	I1101 09:22:35.486948       1 main.go:148] setting mtu 1500 for CNI 
	I1101 09:22:35.487016       1 main.go:178] kindnetd IP family: "ipv4"
	I1101 09:22:35.487040       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-01T09:22:35Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1101 09:22:35.785409       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1101 09:22:35.785453       1 controller.go:381] "Waiting for informer caches to sync"
	I1101 09:22:35.785469       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1101 09:22:35.785628       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1101 09:22:36.085617       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1101 09:22:36.085650       1 metrics.go:72] Registering metrics
	I1101 09:22:36.085716       1 controller.go:711] "Syncing nftables rules"
	I1101 09:22:45.785964       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1101 09:22:45.786015       1 main.go:301] handling current node
	I1101 09:22:55.785405       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1101 09:22:55.785459       1 main.go:301] handling current node
	I1101 09:23:05.785970       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1101 09:23:05.786007       1 main.go:301] handling current node
	I1101 09:23:15.786093       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1101 09:23:15.786133       1 main.go:301] handling current node
	I1101 09:23:25.790237       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1101 09:23:25.790283       1 main.go:301] handling current node
	
	
	==> kube-apiserver [9f9ab169c19541b266c8fd479fd93eb23c6c19f093c940d8f847e81ae5f10c2a] <==
	I1101 09:22:34.024904       1 policy_source.go:240] refreshing policies
	I1101 09:22:34.029898       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1101 09:22:34.030347       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1101 09:22:34.047337       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1101 09:22:34.047494       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1101 09:22:34.049064       1 aggregator.go:171] initial CRD sync complete...
	I1101 09:22:34.049104       1 autoregister_controller.go:144] Starting autoregister controller
	I1101 09:22:34.049115       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1101 09:22:34.049127       1 cache.go:39] Caches are synced for autoregister controller
	I1101 09:22:34.057876       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1101 09:22:34.060782       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1101 09:22:34.087460       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1101 09:22:34.104003       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1101 09:22:34.116120       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1101 09:22:34.479542       1 controller.go:667] quota admission added evaluator for: namespaces
	I1101 09:22:34.510518       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1101 09:22:34.531265       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1101 09:22:34.540978       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1101 09:22:34.551581       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1101 09:22:34.597510       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.110.139.153"}
	I1101 09:22:34.610846       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.100.10.132"}
	I1101 09:22:34.909941       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1101 09:22:37.678888       1 controller.go:667] quota admission added evaluator for: endpoints
	I1101 09:22:37.731351       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1101 09:22:37.779222       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [3a70839c32b1cecca0edb91a2d14108dcb786cb224d520bf5ad312290fa6eb4d] <==
	I1101 09:22:37.311261       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1101 09:22:37.313595       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1101 09:22:37.318908       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1101 09:22:37.324269       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1101 09:22:37.324316       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1101 09:22:37.324333       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1101 09:22:37.324349       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1101 09:22:37.324359       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1101 09:22:37.324349       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1101 09:22:37.324371       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1101 09:22:37.324333       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1101 09:22:37.325263       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1101 09:22:37.325307       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1101 09:22:37.325650       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1101 09:22:37.327472       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1101 09:22:37.327697       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1101 09:22:37.332252       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1101 09:22:37.335592       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1101 09:22:37.335715       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1101 09:22:37.335804       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="default-k8s-diff-port-648641"
	I1101 09:22:37.335902       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1101 09:22:37.336925       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1101 09:22:37.342969       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1101 09:22:37.348219       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1101 09:22:37.356559       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [bcf6725cf8f1856ef02c34b30c6b1276953d6f898448506f89f239326f5a432f] <==
	I1101 09:22:35.315282       1 server_linux.go:53] "Using iptables proxy"
	I1101 09:22:35.393430       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1101 09:22:35.494486       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1101 09:22:35.494567       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.103.2"]
	E1101 09:22:35.494676       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1101 09:22:35.518133       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1101 09:22:35.518224       1 server_linux.go:132] "Using iptables Proxier"
	I1101 09:22:35.524407       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1101 09:22:35.525019       1 server.go:527] "Version info" version="v1.34.1"
	I1101 09:22:35.525467       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1101 09:22:35.530515       1 config.go:200] "Starting service config controller"
	I1101 09:22:35.531007       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1101 09:22:35.530579       1 config.go:106] "Starting endpoint slice config controller"
	I1101 09:22:35.530602       1 config.go:403] "Starting serviceCIDR config controller"
	I1101 09:22:35.531090       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1101 09:22:35.531071       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1101 09:22:35.530667       1 config.go:309] "Starting node config controller"
	I1101 09:22:35.531146       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1101 09:22:35.531154       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1101 09:22:35.631223       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1101 09:22:35.631231       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1101 09:22:35.631314       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [d66f5b99588e058579c26aebe3a8b228526e364d4d10824def10bbd5d58fe3b1] <==
	I1101 09:22:32.451578       1 serving.go:386] Generated self-signed cert in-memory
	I1101 09:22:34.100185       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1101 09:22:34.100230       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1101 09:22:34.119300       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1101 09:22:34.119454       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1101 09:22:34.119535       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1101 09:22:34.119546       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1101 09:22:34.119573       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1101 09:22:34.119582       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1101 09:22:34.121722       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1101 09:22:34.121628       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1101 09:22:34.219626       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1101 09:22:34.219703       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1101 09:22:34.219955       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	
	
	==> kubelet <==
	Nov 01 09:22:37 default-k8s-diff-port-648641 kubelet[717]: I1101 09:22:37.966036     717 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/6c1fe8d3-c5e4-4881-bb70-2b1f6dbc3924-tmp-volume\") pod \"dashboard-metrics-scraper-6ffb444bf9-ls2kb\" (UID: \"6c1fe8d3-c5e4-4881-bb70-2b1f6dbc3924\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-ls2kb"
	Nov 01 09:22:40 default-k8s-diff-port-648641 kubelet[717]: I1101 09:22:40.638691     717 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Nov 01 09:22:41 default-k8s-diff-port-648641 kubelet[717]: I1101 09:22:41.968511     717 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-ls2kb" podStartSLOduration=2.239987566 podStartE2EDuration="4.968484836s" podCreationTimestamp="2025-11-01 09:22:37 +0000 UTC" firstStartedPulling="2025-11-01 09:22:38.183944273 +0000 UTC m=+7.463741503" lastFinishedPulling="2025-11-01 09:22:40.912441538 +0000 UTC m=+10.192238773" observedRunningTime="2025-11-01 09:22:40.994964403 +0000 UTC m=+10.274761640" watchObservedRunningTime="2025-11-01 09:22:41.968484836 +0000 UTC m=+11.248282074"
	Nov 01 09:22:41 default-k8s-diff-port-648641 kubelet[717]: I1101 09:22:41.997881     717 scope.go:117] "RemoveContainer" containerID="8d8eaa09772204fc818e9a8de4b9f5786ca37d199a1d24dd7e18dc558b2dd215"
	Nov 01 09:22:43 default-k8s-diff-port-648641 kubelet[717]: I1101 09:22:43.005681     717 scope.go:117] "RemoveContainer" containerID="8d8eaa09772204fc818e9a8de4b9f5786ca37d199a1d24dd7e18dc558b2dd215"
	Nov 01 09:22:43 default-k8s-diff-port-648641 kubelet[717]: I1101 09:22:43.006315     717 scope.go:117] "RemoveContainer" containerID="9279f473364d230118c9edb825c69cdbdebc6a437e8e25b4ab6524c9a545664a"
	Nov 01 09:22:43 default-k8s-diff-port-648641 kubelet[717]: E1101 09:22:43.006617     717 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-ls2kb_kubernetes-dashboard(6c1fe8d3-c5e4-4881-bb70-2b1f6dbc3924)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-ls2kb" podUID="6c1fe8d3-c5e4-4881-bb70-2b1f6dbc3924"
	Nov 01 09:22:44 default-k8s-diff-port-648641 kubelet[717]: I1101 09:22:44.010072     717 scope.go:117] "RemoveContainer" containerID="9279f473364d230118c9edb825c69cdbdebc6a437e8e25b4ab6524c9a545664a"
	Nov 01 09:22:44 default-k8s-diff-port-648641 kubelet[717]: E1101 09:22:44.010279     717 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-ls2kb_kubernetes-dashboard(6c1fe8d3-c5e4-4881-bb70-2b1f6dbc3924)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-ls2kb" podUID="6c1fe8d3-c5e4-4881-bb70-2b1f6dbc3924"
	Nov 01 09:22:46 default-k8s-diff-port-648641 kubelet[717]: I1101 09:22:46.029367     717 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-9lh8h" podStartSLOduration=2.19113033 podStartE2EDuration="9.029343894s" podCreationTimestamp="2025-11-01 09:22:37 +0000 UTC" firstStartedPulling="2025-11-01 09:22:38.18462861 +0000 UTC m=+7.464425840" lastFinishedPulling="2025-11-01 09:22:45.022842179 +0000 UTC m=+14.302639404" observedRunningTime="2025-11-01 09:22:46.029148194 +0000 UTC m=+15.308945432" watchObservedRunningTime="2025-11-01 09:22:46.029343894 +0000 UTC m=+15.309141134"
	Nov 01 09:22:47 default-k8s-diff-port-648641 kubelet[717]: I1101 09:22:47.834298     717 scope.go:117] "RemoveContainer" containerID="9279f473364d230118c9edb825c69cdbdebc6a437e8e25b4ab6524c9a545664a"
	Nov 01 09:22:47 default-k8s-diff-port-648641 kubelet[717]: E1101 09:22:47.834565     717 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-ls2kb_kubernetes-dashboard(6c1fe8d3-c5e4-4881-bb70-2b1f6dbc3924)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-ls2kb" podUID="6c1fe8d3-c5e4-4881-bb70-2b1f6dbc3924"
	Nov 01 09:22:59 default-k8s-diff-port-648641 kubelet[717]: I1101 09:22:59.845158     717 scope.go:117] "RemoveContainer" containerID="9279f473364d230118c9edb825c69cdbdebc6a437e8e25b4ab6524c9a545664a"
	Nov 01 09:23:00 default-k8s-diff-port-648641 kubelet[717]: I1101 09:23:00.056753     717 scope.go:117] "RemoveContainer" containerID="9279f473364d230118c9edb825c69cdbdebc6a437e8e25b4ab6524c9a545664a"
	Nov 01 09:23:00 default-k8s-diff-port-648641 kubelet[717]: I1101 09:23:00.057033     717 scope.go:117] "RemoveContainer" containerID="472b8c61b8d9de3cfc6073493b5e39efb4fa16b1d084894a85d78ac191365539"
	Nov 01 09:23:00 default-k8s-diff-port-648641 kubelet[717]: E1101 09:23:00.057235     717 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-ls2kb_kubernetes-dashboard(6c1fe8d3-c5e4-4881-bb70-2b1f6dbc3924)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-ls2kb" podUID="6c1fe8d3-c5e4-4881-bb70-2b1f6dbc3924"
	Nov 01 09:23:06 default-k8s-diff-port-648641 kubelet[717]: I1101 09:23:06.078614     717 scope.go:117] "RemoveContainer" containerID="21cef8656a25dce4efb27f988e8a3cc9dce09db0fc84534eef074135f376089e"
	Nov 01 09:23:07 default-k8s-diff-port-648641 kubelet[717]: I1101 09:23:07.835193     717 scope.go:117] "RemoveContainer" containerID="472b8c61b8d9de3cfc6073493b5e39efb4fa16b1d084894a85d78ac191365539"
	Nov 01 09:23:07 default-k8s-diff-port-648641 kubelet[717]: E1101 09:23:07.835371     717 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-ls2kb_kubernetes-dashboard(6c1fe8d3-c5e4-4881-bb70-2b1f6dbc3924)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-ls2kb" podUID="6c1fe8d3-c5e4-4881-bb70-2b1f6dbc3924"
	Nov 01 09:23:18 default-k8s-diff-port-648641 kubelet[717]: I1101 09:23:18.845805     717 scope.go:117] "RemoveContainer" containerID="472b8c61b8d9de3cfc6073493b5e39efb4fa16b1d084894a85d78ac191365539"
	Nov 01 09:23:18 default-k8s-diff-port-648641 kubelet[717]: E1101 09:23:18.846140     717 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-ls2kb_kubernetes-dashboard(6c1fe8d3-c5e4-4881-bb70-2b1f6dbc3924)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-ls2kb" podUID="6c1fe8d3-c5e4-4881-bb70-2b1f6dbc3924"
	Nov 01 09:23:24 default-k8s-diff-port-648641 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 01 09:23:24 default-k8s-diff-port-648641 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 01 09:23:24 default-k8s-diff-port-648641 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Nov 01 09:23:24 default-k8s-diff-port-648641 systemd[1]: kubelet.service: Consumed 1.911s CPU time.
	
	
	==> kubernetes-dashboard [fddb3c973428f71ea46a6abdb4fa01b2d9bf2ce8c6c1755b890ee66f7e28e5d6] <==
	2025/11/01 09:22:45 Starting overwatch
	2025/11/01 09:22:45 Using namespace: kubernetes-dashboard
	2025/11/01 09:22:45 Using in-cluster config to connect to apiserver
	2025/11/01 09:22:45 Using secret token for csrf signing
	2025/11/01 09:22:45 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/11/01 09:22:45 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/11/01 09:22:45 Successful initial request to the apiserver, version: v1.34.1
	2025/11/01 09:22:45 Generating JWE encryption key
	2025/11/01 09:22:45 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/11/01 09:22:45 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/11/01 09:22:45 Initializing JWE encryption key from synchronized object
	2025/11/01 09:22:45 Creating in-cluster Sidecar client
	2025/11/01 09:22:45 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/01 09:22:45 Serving insecurely on HTTP port: 9090
	2025/11/01 09:23:15 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [21cef8656a25dce4efb27f988e8a3cc9dce09db0fc84534eef074135f376089e] <==
	I1101 09:22:35.276611       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1101 09:23:05.281303       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [681c90b1a234a7b0e12da1b109f0109f2d356035b96075b92f31b7a62e17be33] <==
	I1101 09:23:06.130803       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1101 09:23:06.138453       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1101 09:23:06.138508       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1101 09:23:06.141090       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:23:09.596850       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:23:13.857248       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:23:17.457385       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:23:20.511770       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:23:23.534713       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:23:23.541280       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1101 09:23:23.541496       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1101 09:23:23.541613       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"a1a9b3ee-ad3e-47d4-8f38-298304c860b4", APIVersion:"v1", ResourceVersion:"636", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-648641_94d78cd6-9e87-4b9f-9bde-69613bcb9682 became leader
	I1101 09:23:23.541772       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-648641_94d78cd6-9e87-4b9f-9bde-69613bcb9682!
	W1101 09:23:23.545475       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:23:23.553041       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1101 09:23:23.642800       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-648641_94d78cd6-9e87-4b9f-9bde-69613bcb9682!
	W1101 09:23:25.557052       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:23:25.561771       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:23:27.565515       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:23:27.652077       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-648641 -n default-k8s-diff-port-648641
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-648641 -n default-k8s-diff-port-648641: exit status 2 (424.138007ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context default-k8s-diff-port-648641 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect default-k8s-diff-port-648641
helpers_test.go:243: (dbg) docker inspect default-k8s-diff-port-648641:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "57e212cd292edfd327da04bf29f32b5b288abd48bc5ceb3af8d29be761bfdf53",
	        "Created": "2025-11-01T09:21:16.622802953Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 285307,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-01T09:22:23.318697336Z",
	            "FinishedAt": "2025-11-01T09:22:22.278074274Z"
	        },
	        "Image": "sha256:a1caeebaf98ed0136731e905a1e086f77985a42c2ebb5a7e0b3d0bd7fcbe10cc",
	        "ResolvConfPath": "/var/lib/docker/containers/57e212cd292edfd327da04bf29f32b5b288abd48bc5ceb3af8d29be761bfdf53/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/57e212cd292edfd327da04bf29f32b5b288abd48bc5ceb3af8d29be761bfdf53/hostname",
	        "HostsPath": "/var/lib/docker/containers/57e212cd292edfd327da04bf29f32b5b288abd48bc5ceb3af8d29be761bfdf53/hosts",
	        "LogPath": "/var/lib/docker/containers/57e212cd292edfd327da04bf29f32b5b288abd48bc5ceb3af8d29be761bfdf53/57e212cd292edfd327da04bf29f32b5b288abd48bc5ceb3af8d29be761bfdf53-json.log",
	        "Name": "/default-k8s-diff-port-648641",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-648641:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "default-k8s-diff-port-648641",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "57e212cd292edfd327da04bf29f32b5b288abd48bc5ceb3af8d29be761bfdf53",
	                "LowerDir": "/var/lib/docker/overlay2/5e7c7f3822b950cf98e6234ac809850a021b136b26905d554019d5f32326262b-init/diff:/var/lib/docker/overlay2/95b4043403bc6c2b0722fad0bb31817195e1e004282cc78b772e7c8c9f9def2d/diff",
	                "MergedDir": "/var/lib/docker/overlay2/5e7c7f3822b950cf98e6234ac809850a021b136b26905d554019d5f32326262b/merged",
	                "UpperDir": "/var/lib/docker/overlay2/5e7c7f3822b950cf98e6234ac809850a021b136b26905d554019d5f32326262b/diff",
	                "WorkDir": "/var/lib/docker/overlay2/5e7c7f3822b950cf98e6234ac809850a021b136b26905d554019d5f32326262b/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-648641",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-648641/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-648641",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-648641",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-648641",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "7db4b9865fef70ae9eb390a6d220ee8e7de85a499a09d95a53fe7dbb71eea749",
	            "SandboxKey": "/var/run/docker/netns/7db4b9865fef",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33108"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33109"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33112"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33110"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33111"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "default-k8s-diff-port-648641": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "8a:ea:7a:48:3d:92",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "7a970666a21b0480d187f349d9b6ff5e5ba4999bec31b90faf658b9146692b6b",
	                    "EndpointID": "ddf17ccacf93b5c21ed241240a6f5f39df837cc9507d0b97c56abe5dadb81882",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-648641",
	                        "57e212cd292e"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
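For reference, the port mappings captured in the inspect output above (for example 8444/tcp published at 127.0.0.1:33111) can be read back directly from the Docker CLI; a minimal sketch, assuming the same container name and a standard docker client:

	# print the host endpoint published for the apiserver port 8444/tcp
	docker port default-k8s-diff-port-648641 8444/tcp
	# equivalent query with a Go template over the same inspect data
	docker inspect -f '{{(index (index .NetworkSettings.Ports "8444/tcp") 0).HostPort}}' default-k8s-diff-port-648641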
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-648641 -n default-k8s-diff-port-648641
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-648641 -n default-k8s-diff-port-648641: exit status 2 (397.435566ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
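The Host field alone reports Running even though the command exits with status 2, presumably because other components are not in a Running state after the pause; a sketch for checking the remaining status fields with the same binary (template fields assumed from minikube's status output, not confirmed by this report):

	out/minikube-linux-amd64 status -p default-k8s-diff-port-648641 --format '{{.Host}} {{.Kubelet}} {{.APIServer}} {{.Kubeconfig}}'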
helpers_test.go:252: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-648641 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-648641 logs -n 25: (1.250399714s)
helpers_test.go:260: TestStartStop/group/default-k8s-diff-port/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                ARGS                                                                                │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p kindnet-204434 sudo systemctl status kubelet --all --full --no-pager                                                                                            │ kindnet-204434               │ jenkins │ v1.37.0 │ 01 Nov 25 09:23 UTC │ 01 Nov 25 09:23 UTC │
	│ ssh     │ -p kindnet-204434 sudo systemctl cat kubelet --no-pager                                                                                                            │ kindnet-204434               │ jenkins │ v1.37.0 │ 01 Nov 25 09:23 UTC │ 01 Nov 25 09:23 UTC │
	│ ssh     │ -p kindnet-204434 sudo journalctl -xeu kubelet --all --full --no-pager                                                                                             │ kindnet-204434               │ jenkins │ v1.37.0 │ 01 Nov 25 09:23 UTC │ 01 Nov 25 09:23 UTC │
	│ ssh     │ -p kindnet-204434 sudo cat /etc/kubernetes/kubelet.conf                                                                                                            │ kindnet-204434               │ jenkins │ v1.37.0 │ 01 Nov 25 09:23 UTC │ 01 Nov 25 09:23 UTC │
	│ start   │ -p custom-flannel-204434 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio │ custom-flannel-204434        │ jenkins │ v1.37.0 │ 01 Nov 25 09:23 UTC │                     │
	│ ssh     │ -p kindnet-204434 sudo cat /var/lib/kubelet/config.yaml                                                                                                            │ kindnet-204434               │ jenkins │ v1.37.0 │ 01 Nov 25 09:23 UTC │ 01 Nov 25 09:23 UTC │
	│ ssh     │ -p kindnet-204434 sudo systemctl status docker --all --full --no-pager                                                                                             │ kindnet-204434               │ jenkins │ v1.37.0 │ 01 Nov 25 09:23 UTC │                     │
	│ pause   │ -p default-k8s-diff-port-648641 --alsologtostderr -v=1                                                                                                             │ default-k8s-diff-port-648641 │ jenkins │ v1.37.0 │ 01 Nov 25 09:23 UTC │                     │
	│ ssh     │ -p kindnet-204434 sudo systemctl cat docker --no-pager                                                                                                             │ kindnet-204434               │ jenkins │ v1.37.0 │ 01 Nov 25 09:23 UTC │ 01 Nov 25 09:23 UTC │
	│ ssh     │ -p kindnet-204434 sudo cat /etc/docker/daemon.json                                                                                                                 │ kindnet-204434               │ jenkins │ v1.37.0 │ 01 Nov 25 09:23 UTC │                     │
	│ ssh     │ -p kindnet-204434 sudo docker system info                                                                                                                          │ kindnet-204434               │ jenkins │ v1.37.0 │ 01 Nov 25 09:23 UTC │                     │
	│ ssh     │ -p kindnet-204434 sudo systemctl status cri-docker --all --full --no-pager                                                                                         │ kindnet-204434               │ jenkins │ v1.37.0 │ 01 Nov 25 09:23 UTC │                     │
	│ ssh     │ -p kindnet-204434 sudo systemctl cat cri-docker --no-pager                                                                                                         │ kindnet-204434               │ jenkins │ v1.37.0 │ 01 Nov 25 09:23 UTC │ 01 Nov 25 09:23 UTC │
	│ ssh     │ -p kindnet-204434 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                                                    │ kindnet-204434               │ jenkins │ v1.37.0 │ 01 Nov 25 09:23 UTC │                     │
	│ ssh     │ -p kindnet-204434 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                                              │ kindnet-204434               │ jenkins │ v1.37.0 │ 01 Nov 25 09:23 UTC │ 01 Nov 25 09:23 UTC │
	│ ssh     │ -p kindnet-204434 sudo cri-dockerd --version                                                                                                                       │ kindnet-204434               │ jenkins │ v1.37.0 │ 01 Nov 25 09:23 UTC │ 01 Nov 25 09:23 UTC │
	│ ssh     │ -p kindnet-204434 sudo systemctl status containerd --all --full --no-pager                                                                                         │ kindnet-204434               │ jenkins │ v1.37.0 │ 01 Nov 25 09:23 UTC │                     │
	│ ssh     │ -p kindnet-204434 sudo systemctl cat containerd --no-pager                                                                                                         │ kindnet-204434               │ jenkins │ v1.37.0 │ 01 Nov 25 09:23 UTC │ 01 Nov 25 09:23 UTC │
	│ ssh     │ -p kindnet-204434 sudo cat /lib/systemd/system/containerd.service                                                                                                  │ kindnet-204434               │ jenkins │ v1.37.0 │ 01 Nov 25 09:23 UTC │ 01 Nov 25 09:23 UTC │
	│ ssh     │ -p kindnet-204434 sudo cat /etc/containerd/config.toml                                                                                                             │ kindnet-204434               │ jenkins │ v1.37.0 │ 01 Nov 25 09:23 UTC │ 01 Nov 25 09:23 UTC │
	│ ssh     │ -p kindnet-204434 sudo containerd config dump                                                                                                                      │ kindnet-204434               │ jenkins │ v1.37.0 │ 01 Nov 25 09:23 UTC │ 01 Nov 25 09:23 UTC │
	│ ssh     │ -p kindnet-204434 sudo systemctl status crio --all --full --no-pager                                                                                               │ kindnet-204434               │ jenkins │ v1.37.0 │ 01 Nov 25 09:23 UTC │ 01 Nov 25 09:23 UTC │
	│ ssh     │ -p kindnet-204434 sudo systemctl cat crio --no-pager                                                                                                               │ kindnet-204434               │ jenkins │ v1.37.0 │ 01 Nov 25 09:23 UTC │ 01 Nov 25 09:23 UTC │
	│ ssh     │ -p kindnet-204434 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                     │ kindnet-204434               │ jenkins │ v1.37.0 │ 01 Nov 25 09:23 UTC │ 01 Nov 25 09:23 UTC │
	│ ssh     │ -p kindnet-204434 sudo crio config                                                                                                                                 │ kindnet-204434               │ jenkins │ v1.37.0 │ 01 Nov 25 09:23 UTC │                     │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/01 09:23:23
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1101 09:23:23.170464  299985 out.go:360] Setting OutFile to fd 1 ...
	I1101 09:23:23.170727  299985 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:23:23.170736  299985 out.go:374] Setting ErrFile to fd 2...
	I1101 09:23:23.170749  299985 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:23:23.170959  299985 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21835-5913/.minikube/bin
	I1101 09:23:23.171449  299985 out.go:368] Setting JSON to false
	I1101 09:23:23.172708  299985 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":3951,"bootTime":1761985052,"procs":325,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1101 09:23:23.172804  299985 start.go:143] virtualization: kvm guest
	I1101 09:23:23.175257  299985 out.go:179] * [custom-flannel-204434] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1101 09:23:23.176784  299985 out.go:179]   - MINIKUBE_LOCATION=21835
	I1101 09:23:23.176811  299985 notify.go:221] Checking for updates...
	I1101 09:23:23.179401  299985 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1101 09:23:23.180598  299985 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21835-5913/kubeconfig
	I1101 09:23:23.181849  299985 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21835-5913/.minikube
	I1101 09:23:23.183171  299985 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1101 09:23:23.184556  299985 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1101 09:23:23.186532  299985 config.go:182] Loaded profile config "calico-204434": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:23:23.186652  299985 config.go:182] Loaded profile config "default-k8s-diff-port-648641": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:23:23.186733  299985 config.go:182] Loaded profile config "kindnet-204434": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:23:23.186848  299985 driver.go:422] Setting default libvirt URI to qemu:///system
	I1101 09:23:23.216607  299985 docker.go:124] docker version: linux-28.5.1:Docker Engine - Community
	I1101 09:23:23.216828  299985 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 09:23:23.280327  299985 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:false NGoroutines:75 SystemTime:2025-11-01 09:23:23.270216845 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1101 09:23:23.280425  299985 docker.go:319] overlay module found
	I1101 09:23:23.281947  299985 out.go:179] * Using the docker driver based on user configuration
	I1101 09:23:23.283212  299985 start.go:309] selected driver: docker
	I1101 09:23:23.283231  299985 start.go:930] validating driver "docker" against <nil>
	I1101 09:23:23.283249  299985 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1101 09:23:23.284041  299985 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 09:23:23.346736  299985 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:false NGoroutines:75 SystemTime:2025-11-01 09:23:23.335859764 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1101 09:23:23.346932  299985 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1101 09:23:23.347162  299985 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1101 09:23:23.348648  299985 out.go:179] * Using Docker driver with root privileges
	I1101 09:23:23.349810  299985 cni.go:84] Creating CNI manager for "testdata/kube-flannel.yaml"
	I1101 09:23:23.349852  299985 start_flags.go:336] Found "testdata/kube-flannel.yaml" CNI - setting NetworkPlugin=cni
	I1101 09:23:23.349960  299985 start.go:353] cluster config:
	{Name:custom-flannel-204434 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:custom-flannel-204434 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local
ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath:
StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 09:23:23.351331  299985 out.go:179] * Starting "custom-flannel-204434" primary control-plane node in "custom-flannel-204434" cluster
	I1101 09:23:23.352594  299985 cache.go:124] Beginning downloading kic base image for docker with crio
	I1101 09:23:23.354046  299985 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1101 09:23:23.355444  299985 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1101 09:23:23.355492  299985 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21835-5913/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1101 09:23:23.355525  299985 cache.go:59] Caching tarball of preloaded images
	I1101 09:23:23.355555  299985 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1101 09:23:23.355608  299985 preload.go:233] Found /home/jenkins/minikube-integration/21835-5913/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1101 09:23:23.355619  299985 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1101 09:23:23.355722  299985 profile.go:143] Saving config to /home/jenkins/minikube-integration/21835-5913/.minikube/profiles/custom-flannel-204434/config.json ...
	I1101 09:23:23.355760  299985 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21835-5913/.minikube/profiles/custom-flannel-204434/config.json: {Name:mk3a0c312f231d186d97d032c3fddf54e1f4873c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:23:23.378882  299985 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1101 09:23:23.378907  299985 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1101 09:23:23.378929  299985 cache.go:233] Successfully downloaded all kic artifacts
	I1101 09:23:23.378967  299985 start.go:360] acquireMachinesLock for custom-flannel-204434: {Name:mk3406c2c8247d868011a38e097229190d6440d7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1101 09:23:23.379093  299985 start.go:364] duration metric: took 105.911µs to acquireMachinesLock for "custom-flannel-204434"
	I1101 09:23:23.379125  299985 start.go:93] Provisioning new machine with config: &{Name:custom-flannel-204434 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:custom-flannel-204434 Namespace:default APIServerHAVIP: A
PIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false Disab
leCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1101 09:23:23.379243  299985 start.go:125] createHost starting for "" (driver="docker")
	I1101 09:23:23.381405  299985 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1101 09:23:23.381653  299985 start.go:159] libmachine.API.Create for "custom-flannel-204434" (driver="docker")
	I1101 09:23:23.381690  299985 client.go:173] LocalClient.Create starting
	I1101 09:23:23.381748  299985 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21835-5913/.minikube/certs/ca.pem
	I1101 09:23:23.381788  299985 main.go:143] libmachine: Decoding PEM data...
	I1101 09:23:23.381809  299985 main.go:143] libmachine: Parsing certificate...
	I1101 09:23:23.381917  299985 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21835-5913/.minikube/certs/cert.pem
	I1101 09:23:23.381961  299985 main.go:143] libmachine: Decoding PEM data...
	I1101 09:23:23.381977  299985 main.go:143] libmachine: Parsing certificate...
	I1101 09:23:23.382343  299985 cli_runner.go:164] Run: docker network inspect custom-flannel-204434 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1101 09:23:23.402214  299985 cli_runner.go:211] docker network inspect custom-flannel-204434 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1101 09:23:23.402323  299985 network_create.go:284] running [docker network inspect custom-flannel-204434] to gather additional debugging logs...
	I1101 09:23:23.402348  299985 cli_runner.go:164] Run: docker network inspect custom-flannel-204434
	W1101 09:23:23.422624  299985 cli_runner.go:211] docker network inspect custom-flannel-204434 returned with exit code 1
	I1101 09:23:23.422662  299985 network_create.go:287] error running [docker network inspect custom-flannel-204434]: docker network inspect custom-flannel-204434: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network custom-flannel-204434 not found
	I1101 09:23:23.422679  299985 network_create.go:289] output of [docker network inspect custom-flannel-204434]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network custom-flannel-204434 not found
	
	** /stderr **
	I1101 09:23:23.422889  299985 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1101 09:23:23.449805  299985 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-5f44df6b5a5b IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:aa:38:92:20:b3:ae} reservation:<nil>}
	I1101 09:23:23.451125  299985 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-ec772021a1d5 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:e6:14:7e:99:b1:e5} reservation:<nil>}
	I1101 09:23:23.452297  299985 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-6ef14c0d2e1a IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:ca:5b:36:d5:85:2b} reservation:<nil>}
	I1101 09:23:23.453694  299985 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001ec10e0}
	I1101 09:23:23.453733  299985 network_create.go:124] attempt to create docker network custom-flannel-204434 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1101 09:23:23.453795  299985 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=custom-flannel-204434 custom-flannel-204434
	I1101 09:23:23.530328  299985 network_create.go:108] docker network custom-flannel-204434 192.168.76.0/24 created
	I1101 09:23:23.530369  299985 kic.go:121] calculated static IP "192.168.76.2" for the "custom-flannel-204434" container
	I1101 09:23:23.530460  299985 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1101 09:23:23.556780  299985 cli_runner.go:164] Run: docker volume create custom-flannel-204434 --label name.minikube.sigs.k8s.io=custom-flannel-204434 --label created_by.minikube.sigs.k8s.io=true
	I1101 09:23:23.579244  299985 oci.go:103] Successfully created a docker volume custom-flannel-204434
	I1101 09:23:23.579312  299985 cli_runner.go:164] Run: docker run --rm --name custom-flannel-204434-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=custom-flannel-204434 --entrypoint /usr/bin/test -v custom-flannel-204434:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -d /var/lib
	I1101 09:23:24.039358  299985 oci.go:107] Successfully prepared a docker volume custom-flannel-204434
	I1101 09:23:24.039929  299985 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1101 09:23:24.039978  299985 kic.go:194] Starting extracting preloaded images to volume ...
	I1101 09:23:24.040094  299985 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21835-5913/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v custom-flannel-204434:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir
	
	
	==> CRI-O <==
	Nov 01 09:22:45 default-k8s-diff-port-648641 crio[562]: time="2025-11-01T09:22:45.802310455Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 01 09:22:45 default-k8s-diff-port-648641 crio[562]: time="2025-11-01T09:22:45.80618402Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 01 09:22:45 default-k8s-diff-port-648641 crio[562]: time="2025-11-01T09:22:45.806235095Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 01 09:22:59 default-k8s-diff-port-648641 crio[562]: time="2025-11-01T09:22:59.845805423Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=870a8415-ed52-4071-a701-07ccf2862148 name=/runtime.v1.ImageService/ImageStatus
	Nov 01 09:22:59 default-k8s-diff-port-648641 crio[562]: time="2025-11-01T09:22:59.849327546Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=fb536794-86fa-49fb-b836-9ad6fc9a9343 name=/runtime.v1.ImageService/ImageStatus
	Nov 01 09:22:59 default-k8s-diff-port-648641 crio[562]: time="2025-11-01T09:22:59.853506978Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-ls2kb/dashboard-metrics-scraper" id=f1b83110-e491-4231-ba15-842d1fda1cda name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 09:22:59 default-k8s-diff-port-648641 crio[562]: time="2025-11-01T09:22:59.853654021Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 09:22:59 default-k8s-diff-port-648641 crio[562]: time="2025-11-01T09:22:59.86219015Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 09:22:59 default-k8s-diff-port-648641 crio[562]: time="2025-11-01T09:22:59.862849022Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 09:22:59 default-k8s-diff-port-648641 crio[562]: time="2025-11-01T09:22:59.893346408Z" level=info msg="Created container 472b8c61b8d9de3cfc6073493b5e39efb4fa16b1d084894a85d78ac191365539: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-ls2kb/dashboard-metrics-scraper" id=f1b83110-e491-4231-ba15-842d1fda1cda name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 09:22:59 default-k8s-diff-port-648641 crio[562]: time="2025-11-01T09:22:59.89412124Z" level=info msg="Starting container: 472b8c61b8d9de3cfc6073493b5e39efb4fa16b1d084894a85d78ac191365539" id=00f16eea-6f65-415b-bb47-6fa60f4e254e name=/runtime.v1.RuntimeService/StartContainer
	Nov 01 09:22:59 default-k8s-diff-port-648641 crio[562]: time="2025-11-01T09:22:59.896461037Z" level=info msg="Started container" PID=1737 containerID=472b8c61b8d9de3cfc6073493b5e39efb4fa16b1d084894a85d78ac191365539 description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-ls2kb/dashboard-metrics-scraper id=00f16eea-6f65-415b-bb47-6fa60f4e254e name=/runtime.v1.RuntimeService/StartContainer sandboxID=7bd2190561825efec7fdb2fbc6cf09118d40c42b963943e5fbabafe89d89973d
	Nov 01 09:23:00 default-k8s-diff-port-648641 crio[562]: time="2025-11-01T09:23:00.058609294Z" level=info msg="Removing container: 9279f473364d230118c9edb825c69cdbdebc6a437e8e25b4ab6524c9a545664a" id=a6321766-eed8-4165-8cfe-954f5855a77a name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 01 09:23:00 default-k8s-diff-port-648641 crio[562]: time="2025-11-01T09:23:00.069196586Z" level=info msg="Removed container 9279f473364d230118c9edb825c69cdbdebc6a437e8e25b4ab6524c9a545664a: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-ls2kb/dashboard-metrics-scraper" id=a6321766-eed8-4165-8cfe-954f5855a77a name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 01 09:23:06 default-k8s-diff-port-648641 crio[562]: time="2025-11-01T09:23:06.079099058Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=c0d54bc8-4af4-476c-b1e8-1fc1008d5d1d name=/runtime.v1.ImageService/ImageStatus
	Nov 01 09:23:06 default-k8s-diff-port-648641 crio[562]: time="2025-11-01T09:23:06.080138388Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=cf24be3a-1aa3-481a-9727-5b676c9620f8 name=/runtime.v1.ImageService/ImageStatus
	Nov 01 09:23:06 default-k8s-diff-port-648641 crio[562]: time="2025-11-01T09:23:06.081183756Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=ac85eb27-0f30-40e4-9407-67948fea989c name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 09:23:06 default-k8s-diff-port-648641 crio[562]: time="2025-11-01T09:23:06.081322406Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 09:23:06 default-k8s-diff-port-648641 crio[562]: time="2025-11-01T09:23:06.086538677Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 09:23:06 default-k8s-diff-port-648641 crio[562]: time="2025-11-01T09:23:06.086771984Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/98cf15d41269a3241406cd5a079fd47100a96fc8c7e5ca9a023189a78766fda2/merged/etc/passwd: no such file or directory"
	Nov 01 09:23:06 default-k8s-diff-port-648641 crio[562]: time="2025-11-01T09:23:06.086811279Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/98cf15d41269a3241406cd5a079fd47100a96fc8c7e5ca9a023189a78766fda2/merged/etc/group: no such file or directory"
	Nov 01 09:23:06 default-k8s-diff-port-648641 crio[562]: time="2025-11-01T09:23:06.087139886Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 09:23:06 default-k8s-diff-port-648641 crio[562]: time="2025-11-01T09:23:06.115822803Z" level=info msg="Created container 681c90b1a234a7b0e12da1b109f0109f2d356035b96075b92f31b7a62e17be33: kube-system/storage-provisioner/storage-provisioner" id=ac85eb27-0f30-40e4-9407-67948fea989c name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 09:23:06 default-k8s-diff-port-648641 crio[562]: time="2025-11-01T09:23:06.116524746Z" level=info msg="Starting container: 681c90b1a234a7b0e12da1b109f0109f2d356035b96075b92f31b7a62e17be33" id=43dc7a29-772d-4dac-a2a2-8907b5fcedcd name=/runtime.v1.RuntimeService/StartContainer
	Nov 01 09:23:06 default-k8s-diff-port-648641 crio[562]: time="2025-11-01T09:23:06.118477475Z" level=info msg="Started container" PID=1754 containerID=681c90b1a234a7b0e12da1b109f0109f2d356035b96075b92f31b7a62e17be33 description=kube-system/storage-provisioner/storage-provisioner id=43dc7a29-772d-4dac-a2a2-8907b5fcedcd name=/runtime.v1.RuntimeService/StartContainer sandboxID=b59708d97fd405195daaff5c96767d9364604de27f8f89cb12fe1ad54903be69
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                                    NAMESPACE
	681c90b1a234a       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           24 seconds ago      Running             storage-provisioner         1                   b59708d97fd40       storage-provisioner                                    kube-system
	472b8c61b8d9d       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           31 seconds ago      Exited              dashboard-metrics-scraper   2                   7bd2190561825       dashboard-metrics-scraper-6ffb444bf9-ls2kb             kubernetes-dashboard
	fddb3c973428f       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   45 seconds ago      Running             kubernetes-dashboard        0                   f0ffc11885fb4       kubernetes-dashboard-855c9754f9-9lh8h                  kubernetes-dashboard
	482ef8962fbc4       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           55 seconds ago      Running             busybox                     1                   5132b9889d4bb       busybox                                                default
	3b82436ea2d57       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                           55 seconds ago      Running             coredns                     0                   256f44448f0b7       coredns-66bc5c9577-nwj2s                               kube-system
	21cef8656a25d       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           55 seconds ago      Exited              storage-provisioner         0                   b59708d97fd40       storage-provisioner                                    kube-system
	283fa2a3fb085       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           55 seconds ago      Running             kindnet-cni                 0                   6d60ae8969dfa       kindnet-fr9cg                                          kube-system
	bcf6725cf8f18       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                           55 seconds ago      Running             kube-proxy                  0                   c19b2d3e2ad83       kube-proxy-nwrt4                                       kube-system
	9f9ab169c1954       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                           59 seconds ago      Running             kube-apiserver              0                   b41aaa79837c7       kube-apiserver-default-k8s-diff-port-648641            kube-system
	3a70839c32b1c       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                           59 seconds ago      Running             kube-controller-manager     0                   852adc9a20e59       kube-controller-manager-default-k8s-diff-port-648641   kube-system
	d2082d0328a77       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                           59 seconds ago      Running             etcd                        0                   7aa2e730d7c31       etcd-default-k8s-diff-port-648641                      kube-system
	d66f5b99588e0       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                           59 seconds ago      Running             kube-scheduler              0                   889c943581712       kube-scheduler-default-k8s-diff-port-648641            kube-system
	
	
	==> coredns [3b82436ea2d5700e9c28432df7c0c8995bb11039b6647b93defe4dff2a8dee15] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 66f0a748f44f6317a6b122af3f457c9dd0ecaed8718ffbf95a69434523efd9ec4992e71f54c7edd5753646fe9af89ac2138b9c3ce14d4a0ba9d2372a55f120bb
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:52211 - 15184 "HINFO IN 1022244669784587857.603842359072382761. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.126462874s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-648641
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-648641
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=21e20c7776311c6e29254646bf2620ea610dd192
	                    minikube.k8s.io/name=default-k8s-diff-port-648641
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_01T09_21_37_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 01 Nov 2025 09:21:33 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-648641
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 01 Nov 2025 09:23:15 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 01 Nov 2025 09:23:14 +0000   Sat, 01 Nov 2025 09:21:31 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 01 Nov 2025 09:23:14 +0000   Sat, 01 Nov 2025 09:21:31 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 01 Nov 2025 09:23:14 +0000   Sat, 01 Nov 2025 09:21:31 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 01 Nov 2025 09:23:14 +0000   Sat, 01 Nov 2025 09:21:53 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.103.2
	  Hostname:    default-k8s-diff-port-648641
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863348Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863348Ki
	  pods:               110
	System Info:
	  Machine ID:                 98aac72b9abe9f06f1b9b38568f5cc96
	  System UUID:                62320ade-1784-4153-9303-00914bb09bcc
	  Boot ID:                    87f40279-f2b1-40f3-beea-00aea5942dd1
	  Kernel Version:             6.8.0-1043-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         96s
	  kube-system                 coredns-66bc5c9577-nwj2s                                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     110s
	  kube-system                 etcd-default-k8s-diff-port-648641                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         115s
	  kube-system                 kindnet-fr9cg                                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      110s
	  kube-system                 kube-apiserver-default-k8s-diff-port-648641             250m (3%)     0 (0%)      0 (0%)           0 (0%)         115s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-648641    200m (2%)     0 (0%)      0 (0%)           0 (0%)         115s
	  kube-system                 kube-proxy-nwrt4                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         110s
	  kube-system                 kube-scheduler-default-k8s-diff-port-648641             100m (1%)     0 (0%)      0 (0%)           0 (0%)         116s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         109s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-ls2kb              0 (0%)        0 (0%)      0 (0%)           0 (0%)         54s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-9lh8h                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         54s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 108s               kube-proxy       
	  Normal  Starting                 55s                kube-proxy       
	  Normal  NodeHasSufficientMemory  115s               kubelet          Node default-k8s-diff-port-648641 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    115s               kubelet          Node default-k8s-diff-port-648641 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     115s               kubelet          Node default-k8s-diff-port-648641 status is now: NodeHasSufficientPID
	  Normal  Starting                 115s               kubelet          Starting kubelet.
	  Normal  RegisteredNode           111s               node-controller  Node default-k8s-diff-port-648641 event: Registered Node default-k8s-diff-port-648641 in Controller
	  Normal  NodeReady                98s                kubelet          Node default-k8s-diff-port-648641 status is now: NodeReady
	  Normal  Starting                 61s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  61s (x8 over 61s)  kubelet          Node default-k8s-diff-port-648641 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    61s (x8 over 61s)  kubelet          Node default-k8s-diff-port-648641 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     61s (x8 over 61s)  kubelet          Node default-k8s-diff-port-648641 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           54s                node-controller  Node default-k8s-diff-port-648641 event: Registered Node default-k8s-diff-port-648641 in Controller
	
	
	==> dmesg <==
	[  +0.000025] ll header: 00000000: 0a 94 8e 4a 1a 1e 72 e7 eb 50 9b c7 08 00
	[  +2.047726] IPv4: martian source 10.105.176.244 from 192.168.49.2, on dev eth0
	[  +0.000008] ll header: 00000000: 0a 94 8e 4a 1a 1e 72 e7 eb 50 9b c7 08 00
	[Nov 1 08:38] IPv4: martian source 10.105.176.244 from 192.168.49.2, on dev eth0
	[  +0.000025] ll header: 00000000: 0a 94 8e 4a 1a 1e 72 e7 eb 50 9b c7 08 00
	[  +2.215674] IPv4: martian source 10.105.176.244 from 192.168.49.2, on dev eth0
	[  +0.000007] ll header: 00000000: 0a 94 8e 4a 1a 1e 72 e7 eb 50 9b c7 08 00
	[  +1.047939] IPv4: martian source 10.244.0.3 from 192.168.49.2, on dev eth0
	[  +0.000023] ll header: 00000000: 0a 94 8e 4a 1a 1e 72 e7 eb 50 9b c7 08 00
	[  +1.023915] IPv4: martian source 10.244.0.3 from 192.168.49.2, on dev eth0
	[  +0.000007] ll header: 00000000: 0a 94 8e 4a 1a 1e 72 e7 eb 50 9b c7 08 00
	[  +1.023851] IPv4: martian source 10.244.0.3 from 192.168.49.2, on dev eth0
	[  +0.000021] ll header: 00000000: 0a 94 8e 4a 1a 1e 72 e7 eb 50 9b c7 08 00
	[  +1.023867] IPv4: martian source 10.244.0.3 from 192.168.49.2, on dev eth0
	[  +0.000026] ll header: 00000000: 0a 94 8e 4a 1a 1e 72 e7 eb 50 9b c7 08 00
	[  +1.023880] IPv4: martian source 10.244.0.3 from 192.168.49.2, on dev eth0
	[  +0.000033] ll header: 00000000: 0a 94 8e 4a 1a 1e 72 e7 eb 50 9b c7 08 00
	[  +1.023884] IPv4: martian source 10.244.0.3 from 192.168.49.2, on dev eth0
	[  +0.000029] ll header: 00000000: 0a 94 8e 4a 1a 1e 72 e7 eb 50 9b c7 08 00
	[  +1.023851] IPv4: martian source 10.244.0.3 from 192.168.49.2, on dev eth0
	[  +0.000028] ll header: 00000000: 0a 94 8e 4a 1a 1e 72 e7 eb 50 9b c7 08 00
	[  +4.031524] IPv4: martian source 10.244.0.3 from 192.168.49.2, on dev eth0
	[  +0.000026] ll header: 00000000: 0a 94 8e 4a 1a 1e 72 e7 eb 50 9b c7 08 00
	[  +8.255123] IPv4: martian source 10.244.0.3 from 192.168.49.2, on dev eth0
	[  +0.000022] ll header: 00000000: 0a 94 8e 4a 1a 1e 72 e7 eb 50 9b c7 08 00
	
	
	==> etcd [d2082d0328a772edb9696b1f0f12f5a201f1f4a3026e4d11e8ca74c484edb87e] <==
	{"level":"warn","ts":"2025-11-01T09:22:33.001848Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35596","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:22:33.011699Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35636","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:22:33.043751Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35700","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:22:33.057542Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35750","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:22:33.066272Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35776","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:22:33.077490Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35804","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:22:33.086743Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35838","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:22:33.097168Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35882","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:22:33.106124Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35928","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:22:33.116215Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35980","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:22:33.131696Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36036","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:22:33.144804Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36092","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:22:33.154715Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36116","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:22:33.165387Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36154","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:22:33.179510Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36184","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:22:33.189223Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36218","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:22:33.198775Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36220","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:22:33.217532Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36254","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:22:33.230250Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36274","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:22:33.236093Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36290","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:22:33.244996Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36334","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:22:33.258906Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36360","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:22:33.267965Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36374","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:22:33.281711Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36398","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:22:33.353233Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36402","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 09:23:31 up  1:05,  0 user,  load average: 4.76, 3.60, 2.10
	Linux default-k8s-diff-port-648641 6.8.0-1043-gcp #46~22.04.1-Ubuntu SMP Wed Oct 22 19:00:03 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [283fa2a3fb085218f6af5b72a8cb10747dc821167ca6211f52463b3ce9a3d074] <==
	I1101 09:22:35.486247       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1101 09:22:35.486673       1 main.go:139] hostIP = 192.168.103.2
	podIP = 192.168.103.2
	I1101 09:22:35.486948       1 main.go:148] setting mtu 1500 for CNI 
	I1101 09:22:35.487016       1 main.go:178] kindnetd IP family: "ipv4"
	I1101 09:22:35.487040       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-01T09:22:35Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1101 09:22:35.785409       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1101 09:22:35.785453       1 controller.go:381] "Waiting for informer caches to sync"
	I1101 09:22:35.785469       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1101 09:22:35.785628       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1101 09:22:36.085617       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1101 09:22:36.085650       1 metrics.go:72] Registering metrics
	I1101 09:22:36.085716       1 controller.go:711] "Syncing nftables rules"
	I1101 09:22:45.785964       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1101 09:22:45.786015       1 main.go:301] handling current node
	I1101 09:22:55.785405       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1101 09:22:55.785459       1 main.go:301] handling current node
	I1101 09:23:05.785970       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1101 09:23:05.786007       1 main.go:301] handling current node
	I1101 09:23:15.786093       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1101 09:23:15.786133       1 main.go:301] handling current node
	I1101 09:23:25.790237       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1101 09:23:25.790283       1 main.go:301] handling current node
	
	
	==> kube-apiserver [9f9ab169c19541b266c8fd479fd93eb23c6c19f093c940d8f847e81ae5f10c2a] <==
	I1101 09:22:34.024904       1 policy_source.go:240] refreshing policies
	I1101 09:22:34.029898       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1101 09:22:34.030347       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1101 09:22:34.047337       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1101 09:22:34.047494       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1101 09:22:34.049064       1 aggregator.go:171] initial CRD sync complete...
	I1101 09:22:34.049104       1 autoregister_controller.go:144] Starting autoregister controller
	I1101 09:22:34.049115       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1101 09:22:34.049127       1 cache.go:39] Caches are synced for autoregister controller
	I1101 09:22:34.057876       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1101 09:22:34.060782       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1101 09:22:34.087460       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1101 09:22:34.104003       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1101 09:22:34.116120       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1101 09:22:34.479542       1 controller.go:667] quota admission added evaluator for: namespaces
	I1101 09:22:34.510518       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1101 09:22:34.531265       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1101 09:22:34.540978       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1101 09:22:34.551581       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1101 09:22:34.597510       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.110.139.153"}
	I1101 09:22:34.610846       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.100.10.132"}
	I1101 09:22:34.909941       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1101 09:22:37.678888       1 controller.go:667] quota admission added evaluator for: endpoints
	I1101 09:22:37.731351       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1101 09:22:37.779222       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [3a70839c32b1cecca0edb91a2d14108dcb786cb224d520bf5ad312290fa6eb4d] <==
	I1101 09:22:37.311261       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1101 09:22:37.313595       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1101 09:22:37.318908       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1101 09:22:37.324269       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1101 09:22:37.324316       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1101 09:22:37.324333       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1101 09:22:37.324349       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1101 09:22:37.324359       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1101 09:22:37.324349       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1101 09:22:37.324371       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1101 09:22:37.324333       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1101 09:22:37.325263       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1101 09:22:37.325307       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1101 09:22:37.325650       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1101 09:22:37.327472       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1101 09:22:37.327697       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1101 09:22:37.332252       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1101 09:22:37.335592       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1101 09:22:37.335715       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1101 09:22:37.335804       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="default-k8s-diff-port-648641"
	I1101 09:22:37.335902       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1101 09:22:37.336925       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1101 09:22:37.342969       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1101 09:22:37.348219       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1101 09:22:37.356559       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [bcf6725cf8f1856ef02c34b30c6b1276953d6f898448506f89f239326f5a432f] <==
	I1101 09:22:35.315282       1 server_linux.go:53] "Using iptables proxy"
	I1101 09:22:35.393430       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1101 09:22:35.494486       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1101 09:22:35.494567       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.103.2"]
	E1101 09:22:35.494676       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1101 09:22:35.518133       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1101 09:22:35.518224       1 server_linux.go:132] "Using iptables Proxier"
	I1101 09:22:35.524407       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1101 09:22:35.525019       1 server.go:527] "Version info" version="v1.34.1"
	I1101 09:22:35.525467       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1101 09:22:35.530515       1 config.go:200] "Starting service config controller"
	I1101 09:22:35.531007       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1101 09:22:35.530579       1 config.go:106] "Starting endpoint slice config controller"
	I1101 09:22:35.530602       1 config.go:403] "Starting serviceCIDR config controller"
	I1101 09:22:35.531090       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1101 09:22:35.531071       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1101 09:22:35.530667       1 config.go:309] "Starting node config controller"
	I1101 09:22:35.531146       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1101 09:22:35.531154       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1101 09:22:35.631223       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1101 09:22:35.631231       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1101 09:22:35.631314       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [d66f5b99588e058579c26aebe3a8b228526e364d4d10824def10bbd5d58fe3b1] <==
	I1101 09:22:32.451578       1 serving.go:386] Generated self-signed cert in-memory
	I1101 09:22:34.100185       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1101 09:22:34.100230       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1101 09:22:34.119300       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1101 09:22:34.119454       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1101 09:22:34.119535       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1101 09:22:34.119546       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1101 09:22:34.119573       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1101 09:22:34.119582       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1101 09:22:34.121722       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1101 09:22:34.121628       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1101 09:22:34.219626       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1101 09:22:34.219703       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1101 09:22:34.219955       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	
	
	==> kubelet <==
	Nov 01 09:22:37 default-k8s-diff-port-648641 kubelet[717]: I1101 09:22:37.966036     717 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/6c1fe8d3-c5e4-4881-bb70-2b1f6dbc3924-tmp-volume\") pod \"dashboard-metrics-scraper-6ffb444bf9-ls2kb\" (UID: \"6c1fe8d3-c5e4-4881-bb70-2b1f6dbc3924\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-ls2kb"
	Nov 01 09:22:40 default-k8s-diff-port-648641 kubelet[717]: I1101 09:22:40.638691     717 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Nov 01 09:22:41 default-k8s-diff-port-648641 kubelet[717]: I1101 09:22:41.968511     717 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-ls2kb" podStartSLOduration=2.239987566 podStartE2EDuration="4.968484836s" podCreationTimestamp="2025-11-01 09:22:37 +0000 UTC" firstStartedPulling="2025-11-01 09:22:38.183944273 +0000 UTC m=+7.463741503" lastFinishedPulling="2025-11-01 09:22:40.912441538 +0000 UTC m=+10.192238773" observedRunningTime="2025-11-01 09:22:40.994964403 +0000 UTC m=+10.274761640" watchObservedRunningTime="2025-11-01 09:22:41.968484836 +0000 UTC m=+11.248282074"
	Nov 01 09:22:41 default-k8s-diff-port-648641 kubelet[717]: I1101 09:22:41.997881     717 scope.go:117] "RemoveContainer" containerID="8d8eaa09772204fc818e9a8de4b9f5786ca37d199a1d24dd7e18dc558b2dd215"
	Nov 01 09:22:43 default-k8s-diff-port-648641 kubelet[717]: I1101 09:22:43.005681     717 scope.go:117] "RemoveContainer" containerID="8d8eaa09772204fc818e9a8de4b9f5786ca37d199a1d24dd7e18dc558b2dd215"
	Nov 01 09:22:43 default-k8s-diff-port-648641 kubelet[717]: I1101 09:22:43.006315     717 scope.go:117] "RemoveContainer" containerID="9279f473364d230118c9edb825c69cdbdebc6a437e8e25b4ab6524c9a545664a"
	Nov 01 09:22:43 default-k8s-diff-port-648641 kubelet[717]: E1101 09:22:43.006617     717 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-ls2kb_kubernetes-dashboard(6c1fe8d3-c5e4-4881-bb70-2b1f6dbc3924)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-ls2kb" podUID="6c1fe8d3-c5e4-4881-bb70-2b1f6dbc3924"
	Nov 01 09:22:44 default-k8s-diff-port-648641 kubelet[717]: I1101 09:22:44.010072     717 scope.go:117] "RemoveContainer" containerID="9279f473364d230118c9edb825c69cdbdebc6a437e8e25b4ab6524c9a545664a"
	Nov 01 09:22:44 default-k8s-diff-port-648641 kubelet[717]: E1101 09:22:44.010279     717 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-ls2kb_kubernetes-dashboard(6c1fe8d3-c5e4-4881-bb70-2b1f6dbc3924)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-ls2kb" podUID="6c1fe8d3-c5e4-4881-bb70-2b1f6dbc3924"
	Nov 01 09:22:46 default-k8s-diff-port-648641 kubelet[717]: I1101 09:22:46.029367     717 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-9lh8h" podStartSLOduration=2.19113033 podStartE2EDuration="9.029343894s" podCreationTimestamp="2025-11-01 09:22:37 +0000 UTC" firstStartedPulling="2025-11-01 09:22:38.18462861 +0000 UTC m=+7.464425840" lastFinishedPulling="2025-11-01 09:22:45.022842179 +0000 UTC m=+14.302639404" observedRunningTime="2025-11-01 09:22:46.029148194 +0000 UTC m=+15.308945432" watchObservedRunningTime="2025-11-01 09:22:46.029343894 +0000 UTC m=+15.309141134"
	Nov 01 09:22:47 default-k8s-diff-port-648641 kubelet[717]: I1101 09:22:47.834298     717 scope.go:117] "RemoveContainer" containerID="9279f473364d230118c9edb825c69cdbdebc6a437e8e25b4ab6524c9a545664a"
	Nov 01 09:22:47 default-k8s-diff-port-648641 kubelet[717]: E1101 09:22:47.834565     717 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-ls2kb_kubernetes-dashboard(6c1fe8d3-c5e4-4881-bb70-2b1f6dbc3924)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-ls2kb" podUID="6c1fe8d3-c5e4-4881-bb70-2b1f6dbc3924"
	Nov 01 09:22:59 default-k8s-diff-port-648641 kubelet[717]: I1101 09:22:59.845158     717 scope.go:117] "RemoveContainer" containerID="9279f473364d230118c9edb825c69cdbdebc6a437e8e25b4ab6524c9a545664a"
	Nov 01 09:23:00 default-k8s-diff-port-648641 kubelet[717]: I1101 09:23:00.056753     717 scope.go:117] "RemoveContainer" containerID="9279f473364d230118c9edb825c69cdbdebc6a437e8e25b4ab6524c9a545664a"
	Nov 01 09:23:00 default-k8s-diff-port-648641 kubelet[717]: I1101 09:23:00.057033     717 scope.go:117] "RemoveContainer" containerID="472b8c61b8d9de3cfc6073493b5e39efb4fa16b1d084894a85d78ac191365539"
	Nov 01 09:23:00 default-k8s-diff-port-648641 kubelet[717]: E1101 09:23:00.057235     717 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-ls2kb_kubernetes-dashboard(6c1fe8d3-c5e4-4881-bb70-2b1f6dbc3924)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-ls2kb" podUID="6c1fe8d3-c5e4-4881-bb70-2b1f6dbc3924"
	Nov 01 09:23:06 default-k8s-diff-port-648641 kubelet[717]: I1101 09:23:06.078614     717 scope.go:117] "RemoveContainer" containerID="21cef8656a25dce4efb27f988e8a3cc9dce09db0fc84534eef074135f376089e"
	Nov 01 09:23:07 default-k8s-diff-port-648641 kubelet[717]: I1101 09:23:07.835193     717 scope.go:117] "RemoveContainer" containerID="472b8c61b8d9de3cfc6073493b5e39efb4fa16b1d084894a85d78ac191365539"
	Nov 01 09:23:07 default-k8s-diff-port-648641 kubelet[717]: E1101 09:23:07.835371     717 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-ls2kb_kubernetes-dashboard(6c1fe8d3-c5e4-4881-bb70-2b1f6dbc3924)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-ls2kb" podUID="6c1fe8d3-c5e4-4881-bb70-2b1f6dbc3924"
	Nov 01 09:23:18 default-k8s-diff-port-648641 kubelet[717]: I1101 09:23:18.845805     717 scope.go:117] "RemoveContainer" containerID="472b8c61b8d9de3cfc6073493b5e39efb4fa16b1d084894a85d78ac191365539"
	Nov 01 09:23:18 default-k8s-diff-port-648641 kubelet[717]: E1101 09:23:18.846140     717 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-ls2kb_kubernetes-dashboard(6c1fe8d3-c5e4-4881-bb70-2b1f6dbc3924)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-ls2kb" podUID="6c1fe8d3-c5e4-4881-bb70-2b1f6dbc3924"
	Nov 01 09:23:24 default-k8s-diff-port-648641 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 01 09:23:24 default-k8s-diff-port-648641 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 01 09:23:24 default-k8s-diff-port-648641 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Nov 01 09:23:24 default-k8s-diff-port-648641 systemd[1]: kubelet.service: Consumed 1.911s CPU time.
	
	
	==> kubernetes-dashboard [fddb3c973428f71ea46a6abdb4fa01b2d9bf2ce8c6c1755b890ee66f7e28e5d6] <==
	2025/11/01 09:22:45 Starting overwatch
	2025/11/01 09:22:45 Using namespace: kubernetes-dashboard
	2025/11/01 09:22:45 Using in-cluster config to connect to apiserver
	2025/11/01 09:22:45 Using secret token for csrf signing
	2025/11/01 09:22:45 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/11/01 09:22:45 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/11/01 09:22:45 Successful initial request to the apiserver, version: v1.34.1
	2025/11/01 09:22:45 Generating JWE encryption key
	2025/11/01 09:22:45 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/11/01 09:22:45 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/11/01 09:22:45 Initializing JWE encryption key from synchronized object
	2025/11/01 09:22:45 Creating in-cluster Sidecar client
	2025/11/01 09:22:45 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/01 09:22:45 Serving insecurely on HTTP port: 9090
	2025/11/01 09:23:15 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [21cef8656a25dce4efb27f988e8a3cc9dce09db0fc84534eef074135f376089e] <==
	I1101 09:22:35.276611       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1101 09:23:05.281303       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [681c90b1a234a7b0e12da1b109f0109f2d356035b96075b92f31b7a62e17be33] <==
	I1101 09:23:06.130803       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1101 09:23:06.138453       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1101 09:23:06.138508       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1101 09:23:06.141090       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:23:09.596850       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:23:13.857248       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:23:17.457385       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:23:20.511770       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:23:23.534713       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:23:23.541280       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1101 09:23:23.541496       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1101 09:23:23.541613       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"a1a9b3ee-ad3e-47d4-8f38-298304c860b4", APIVersion:"v1", ResourceVersion:"636", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-648641_94d78cd6-9e87-4b9f-9bde-69613bcb9682 became leader
	I1101 09:23:23.541772       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-648641_94d78cd6-9e87-4b9f-9bde-69613bcb9682!
	W1101 09:23:23.545475       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:23:23.553041       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1101 09:23:23.642800       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-648641_94d78cd6-9e87-4b9f-9bde-69613bcb9682!
	W1101 09:23:25.557052       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:23:25.561771       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:23:27.565515       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:23:27.652077       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:23:29.657731       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:23:29.671222       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-648641 -n default-k8s-diff-port-648641
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-648641 -n default-k8s-diff-port-648641: exit status 2 (340.46706ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context default-k8s-diff-port-648641 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Pause (7.91s)
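For local triage, here is a minimal sketch of the same post-mortem checks the harness logs above. The profile name default-k8s-diff-port-648641 is taken from this run; the initial pause invocation is assumed (it is not shown in this excerpt), while the status and pod queries are the ones recorded by helpers_test.go:

	# Assumed trigger: pause the profile the way the Pause test exercises it.
	out/minikube-linux-amd64 pause -p default-k8s-diff-port-648641 --alsologtostderr -v=1
	# Post-mortem check 1: report the API server state (the harness treats exit status 2 as "may be ok").
	out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-648641 -n default-k8s-diff-port-648641
	# Post-mortem check 2: list any pods that are not in phase Running.
	kubectl --context default-k8s-diff-port-648641 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running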

                                                
                                    

Test pass (261/327)

Order passed test Duration
3 TestDownloadOnly/v1.28.0/json-events 4.57
4 TestDownloadOnly/v1.28.0/preload-exists 0
8 TestDownloadOnly/v1.28.0/LogsDuration 0.07
9 TestDownloadOnly/v1.28.0/DeleteAll 0.24
10 TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds 0.15
12 TestDownloadOnly/v1.34.1/json-events 3.95
13 TestDownloadOnly/v1.34.1/preload-exists 0
17 TestDownloadOnly/v1.34.1/LogsDuration 0.08
18 TestDownloadOnly/v1.34.1/DeleteAll 0.24
19 TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds 0.15
20 TestDownloadOnlyKic 0.43
21 TestBinaryMirror 0.84
22 TestOffline 85.7
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.07
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.07
27 TestAddons/Setup 144.54
31 TestAddons/serial/GCPAuth/Namespaces 0.14
32 TestAddons/serial/GCPAuth/FakeCredentials 7.43
48 TestAddons/StoppedEnableDisable 16.65
49 TestCertOptions 30.41
50 TestCertExpiration 209.64
52 TestForceSystemdFlag 24.4
53 TestForceSystemdEnv 35.66
58 TestErrorSpam/setup 21.38
59 TestErrorSpam/start 0.69
60 TestErrorSpam/status 0.99
61 TestErrorSpam/pause 5.95
62 TestErrorSpam/unpause 5.59
63 TestErrorSpam/stop 8.19
66 TestFunctional/serial/CopySyncFile 0
67 TestFunctional/serial/StartWithProxy 37.96
68 TestFunctional/serial/AuditLog 0
69 TestFunctional/serial/SoftStart 6.32
70 TestFunctional/serial/KubeContext 0.04
71 TestFunctional/serial/KubectlGetPods 0.07
74 TestFunctional/serial/CacheCmd/cache/add_remote 2.77
75 TestFunctional/serial/CacheCmd/cache/add_local 1.17
76 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
77 TestFunctional/serial/CacheCmd/cache/list 0.06
78 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.3
79 TestFunctional/serial/CacheCmd/cache/cache_reload 1.58
80 TestFunctional/serial/CacheCmd/cache/delete 0.13
81 TestFunctional/serial/MinikubeKubectlCmd 0.12
82 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.12
83 TestFunctional/serial/ExtraConfig 60.79
84 TestFunctional/serial/ComponentHealth 0.07
85 TestFunctional/serial/LogsCmd 1.28
86 TestFunctional/serial/LogsFileCmd 1.29
87 TestFunctional/serial/InvalidService 3.62
89 TestFunctional/parallel/ConfigCmd 0.46
90 TestFunctional/parallel/DashboardCmd 6.35
91 TestFunctional/parallel/DryRun 0.39
92 TestFunctional/parallel/InternationalLanguage 0.17
93 TestFunctional/parallel/StatusCmd 0.99
98 TestFunctional/parallel/AddonsCmd 0.15
99 TestFunctional/parallel/PersistentVolumeClaim 54.17
101 TestFunctional/parallel/SSHCmd 0.63
102 TestFunctional/parallel/CpCmd 1.95
103 TestFunctional/parallel/MySQL 15.88
104 TestFunctional/parallel/FileSync 0.3
105 TestFunctional/parallel/CertSync 1.83
109 TestFunctional/parallel/NodeLabels 0.08
111 TestFunctional/parallel/NonActiveRuntimeDisabled 0.66
113 TestFunctional/parallel/License 0.4
114 TestFunctional/parallel/ImageCommands/ImageListShort 0.32
115 TestFunctional/parallel/ImageCommands/ImageListTable 0.29
116 TestFunctional/parallel/ImageCommands/ImageListJson 0.3
118 TestFunctional/parallel/ImageCommands/ImageBuild 3.81
119 TestFunctional/parallel/ImageCommands/Setup 0.99
123 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.45
124 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
126 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 8.23
130 TestFunctional/parallel/ImageCommands/ImageRemove 0.51
133 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.06
134 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 26.47
138 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
139 TestFunctional/parallel/ProfileCmd/profile_not_create 0.44
140 TestFunctional/parallel/ProfileCmd/profile_list 0.43
141 TestFunctional/parallel/ProfileCmd/profile_json_output 0.42
142 TestFunctional/parallel/MountCmd/any-port 5.97
143 TestFunctional/parallel/MountCmd/specific-port 2.09
144 TestFunctional/parallel/MountCmd/VerifyCleanup 1.73
145 TestFunctional/parallel/Version/short 0.06
146 TestFunctional/parallel/Version/components 0.49
147 TestFunctional/parallel/UpdateContextCmd/no_changes 0.15
148 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.15
149 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.15
150 TestFunctional/parallel/ServiceCmd/List 1.7
151 TestFunctional/parallel/ServiceCmd/JSONOutput 1.71
155 TestFunctional/delete_echo-server_images 0.04
156 TestFunctional/delete_my-image_image 0.02
157 TestFunctional/delete_minikube_cached_images 0.02
162 TestMultiControlPlane/serial/StartCluster 116.56
163 TestMultiControlPlane/serial/DeployApp 3.95
164 TestMultiControlPlane/serial/PingHostFromPods 1.07
165 TestMultiControlPlane/serial/AddWorkerNode 25.53
166 TestMultiControlPlane/serial/NodeLabels 0.07
167 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.93
168 TestMultiControlPlane/serial/CopyFile 17.85
169 TestMultiControlPlane/serial/StopSecondaryNode 19.88
170 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.74
171 TestMultiControlPlane/serial/RestartSecondaryNode 14.5
172 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.93
173 TestMultiControlPlane/serial/RestartClusterKeepsNodes 196.81
174 TestMultiControlPlane/serial/DeleteSecondaryNode 10.67
175 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.72
176 TestMultiControlPlane/serial/StopCluster 32.75
177 TestMultiControlPlane/serial/RestartCluster 58.35
178 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.73
179 TestMultiControlPlane/serial/AddSecondaryNode 35.64
180 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.93
185 TestJSONOutput/start/Command 41.46
186 TestJSONOutput/start/Audit 0
188 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
189 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
192 TestJSONOutput/pause/Audit 0
194 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
195 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
198 TestJSONOutput/unpause/Audit 0
200 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
201 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
203 TestJSONOutput/stop/Command 6.19
204 TestJSONOutput/stop/Audit 0
206 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
207 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
208 TestErrorJSONOutput 0.23
210 TestKicCustomNetwork/create_custom_network 28.96
211 TestKicCustomNetwork/use_default_bridge_network 23.56
212 TestKicExistingNetwork 27.09
213 TestKicCustomSubnet 27.82
214 TestKicStaticIP 26.77
215 TestMainNoArgs 0.06
216 TestMinikubeProfile 49.15
219 TestMountStart/serial/StartWithMountFirst 5.64
220 TestMountStart/serial/VerifyMountFirst 0.28
221 TestMountStart/serial/StartWithMountSecond 6.34
222 TestMountStart/serial/VerifyMountSecond 0.28
223 TestMountStart/serial/DeleteFirst 1.76
224 TestMountStart/serial/VerifyMountPostDelete 0.28
225 TestMountStart/serial/Stop 1.26
226 TestMountStart/serial/RestartStopped 7.45
227 TestMountStart/serial/VerifyMountPostStop 0.28
230 TestMultiNode/serial/FreshStart2Nodes 69.87
231 TestMultiNode/serial/DeployApp2Nodes 3.63
232 TestMultiNode/serial/PingHostFrom2Pods 0.71
233 TestMultiNode/serial/AddNode 23.96
234 TestMultiNode/serial/MultiNodeLabels 0.06
235 TestMultiNode/serial/ProfileList 0.68
236 TestMultiNode/serial/CopyFile 9.93
237 TestMultiNode/serial/StopNode 2.29
238 TestMultiNode/serial/StartAfterStop 7.67
239 TestMultiNode/serial/RestartKeepsNodes 75.71
240 TestMultiNode/serial/DeleteNode 5.32
241 TestMultiNode/serial/StopMultiNode 30.46
242 TestMultiNode/serial/RestartMultiNode 46.84
243 TestMultiNode/serial/ValidateNameConflict 23.46
250 TestScheduledStopUnix 96.46
253 TestInsufficientStorage 9.66
254 TestRunningBinaryUpgrade 48.33
256 TestKubernetesUpgrade 310.84
257 TestMissingContainerUpgrade 88.07
259 TestPause/serial/Start 78.54
261 TestNoKubernetes/serial/StartNoK8sWithVersion 0.1
262 TestNoKubernetes/serial/StartWithK8s 35.1
263 TestNoKubernetes/serial/StartWithStopK8s 16.84
264 TestNoKubernetes/serial/Start 4.83
265 TestNoKubernetes/serial/VerifyK8sNotRunning 0.3
266 TestNoKubernetes/serial/ProfileList 3.76
267 TestNoKubernetes/serial/Stop 2.03
268 TestNoKubernetes/serial/StartNoArgs 7.14
269 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.3
270 TestPause/serial/SecondStartNoReconfiguration 6.82
272 TestStoppedBinaryUpgrade/Setup 0.54
273 TestStoppedBinaryUpgrade/Upgrade 67.84
274 TestStoppedBinaryUpgrade/MinikubeLogs 0.97
289 TestNetworkPlugins/group/false 3.57
294 TestStartStop/group/old-k8s-version/serial/FirstStart 50.34
296 TestStartStop/group/no-preload/serial/FirstStart 50.05
297 TestStartStop/group/old-k8s-version/serial/DeployApp 8.25
299 TestStartStop/group/no-preload/serial/DeployApp 9.26
300 TestStartStop/group/old-k8s-version/serial/Stop 17.01
302 TestStartStop/group/no-preload/serial/Stop 18.13
304 TestStartStop/group/embed-certs/serial/FirstStart 40.64
305 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.21
306 TestStartStop/group/old-k8s-version/serial/SecondStart 51.71
307 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.29
308 TestStartStop/group/no-preload/serial/SecondStart 44.22
309 TestStartStop/group/embed-certs/serial/DeployApp 8.25
311 TestStartStop/group/embed-certs/serial/Stop 16.39
312 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6
313 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6
314 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.08
315 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.1
316 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.2
317 TestStartStop/group/embed-certs/serial/SecondStart 44.63
318 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.26
320 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.28
323 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 44.3
325 TestStartStop/group/newest-cni/serial/FirstStart 28.13
326 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6.01
327 TestStartStop/group/newest-cni/serial/DeployApp 0
329 TestStartStop/group/newest-cni/serial/Stop 12.52
330 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.07
331 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.24
333 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 8.26
334 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.3
335 TestStartStop/group/newest-cni/serial/SecondStart 12.52
337 TestNetworkPlugins/group/auto/Start 47.66
338 TestStartStop/group/default-k8s-diff-port/serial/Stop 16.39
339 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
340 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
341 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.3
343 TestNetworkPlugins/group/kindnet/Start 41.34
344 TestNetworkPlugins/group/calico/Start 50.74
345 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.25
346 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 49.63
347 TestNetworkPlugins/group/auto/KubeletFlags 0.36
348 TestNetworkPlugins/group/auto/NetCatPod 8.28
349 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
350 TestNetworkPlugins/group/auto/DNS 0.15
351 TestNetworkPlugins/group/auto/Localhost 0.11
352 TestNetworkPlugins/group/auto/HairPin 0.12
353 TestNetworkPlugins/group/kindnet/KubeletFlags 0.3
354 TestNetworkPlugins/group/kindnet/NetCatPod 8.21
355 TestNetworkPlugins/group/kindnet/DNS 0.13
356 TestNetworkPlugins/group/kindnet/Localhost 0.12
357 TestNetworkPlugins/group/kindnet/HairPin 0.1
358 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6
359 TestNetworkPlugins/group/calico/ControllerPod 6.01
360 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.09
361 TestNetworkPlugins/group/calico/KubeletFlags 0.37
362 TestNetworkPlugins/group/calico/NetCatPod 10.24
363 TestNetworkPlugins/group/custom-flannel/Start 49.45
364 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.29
366 TestNetworkPlugins/group/calico/DNS 0.13
367 TestNetworkPlugins/group/calico/Localhost 0.11
368 TestNetworkPlugins/group/calico/HairPin 0.13
369 TestNetworkPlugins/group/enable-default-cni/Start 65.3
370 TestNetworkPlugins/group/flannel/Start 58.15
371 TestNetworkPlugins/group/bridge/Start 63.44
372 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.39
373 TestNetworkPlugins/group/custom-flannel/NetCatPod 9.26
374 TestNetworkPlugins/group/custom-flannel/DNS 0.12
375 TestNetworkPlugins/group/custom-flannel/Localhost 0.1
376 TestNetworkPlugins/group/custom-flannel/HairPin 0.1
377 TestNetworkPlugins/group/flannel/ControllerPod 6.01
378 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.33
379 TestNetworkPlugins/group/enable-default-cni/NetCatPod 8.2
380 TestNetworkPlugins/group/flannel/KubeletFlags 0.33
381 TestNetworkPlugins/group/flannel/NetCatPod 9.23
382 TestNetworkPlugins/group/enable-default-cni/DNS 0.11
383 TestNetworkPlugins/group/enable-default-cni/Localhost 0.09
384 TestNetworkPlugins/group/enable-default-cni/HairPin 0.09
385 TestNetworkPlugins/group/flannel/DNS 0.11
386 TestNetworkPlugins/group/flannel/Localhost 0.09
387 TestNetworkPlugins/group/flannel/HairPin 0.09
388 TestNetworkPlugins/group/bridge/KubeletFlags 0.34
389 TestNetworkPlugins/group/bridge/NetCatPod 8.22
390 TestNetworkPlugins/group/bridge/DNS 0.15
391 TestNetworkPlugins/group/bridge/Localhost 0.09
392 TestNetworkPlugins/group/bridge/HairPin 0.1
x
+
TestDownloadOnly/v1.28.0/json-events (4.57s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-578604 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-578604 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (4.572253713s)
--- PASS: TestDownloadOnly/v1.28.0/json-events (4.57s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/preload-exists
I1101 08:29:11.821284    9414 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
I1101 08:29:11.821387    9414 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21835-5913/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.28.0/preload-exists (0.00s)
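A quick way to reproduce this check by hand, sketched under the assumption that the local .minikube cache lives in the default location (the version string follows the "Found local preload" line above; adjust the path for your setup):

	# Confirm the v1.28.0 cri-o preload tarball is present in the local cache.
	ls -lh "$HOME/.minikube/cache/preloaded-tarball/preloaded-images-k8s-"*"-v1.28.0-cri-o-overlay-amd64.tar.lz4"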

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-578604
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-578604: exit status 85 (71.660659ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬──────────┐
	│ COMMAND │                                                                                   ARGS                                                                                    │       PROFILE        │  USER   │ VERSION │     START TIME      │ END TIME │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼──────────┤
	│ start   │ -o=json --download-only -p download-only-578604 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-578604 │ jenkins │ v1.37.0 │ 01 Nov 25 08:29 UTC │          │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴──────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/01 08:29:07
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1101 08:29:07.302039    9426 out.go:360] Setting OutFile to fd 1 ...
	I1101 08:29:07.302213    9426 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 08:29:07.302224    9426 out.go:374] Setting ErrFile to fd 2...
	I1101 08:29:07.302230    9426 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 08:29:07.302442    9426 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21835-5913/.minikube/bin
	W1101 08:29:07.302596    9426 root.go:314] Error reading config file at /home/jenkins/minikube-integration/21835-5913/.minikube/config/config.json: open /home/jenkins/minikube-integration/21835-5913/.minikube/config/config.json: no such file or directory
	I1101 08:29:07.303145    9426 out.go:368] Setting JSON to true
	I1101 08:29:07.304096    9426 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":695,"bootTime":1761985052,"procs":204,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1101 08:29:07.304197    9426 start.go:143] virtualization: kvm guest
	I1101 08:29:07.306567    9426 out.go:99] [download-only-578604] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	W1101 08:29:07.306716    9426 preload.go:349] Failed to list preload files: open /home/jenkins/minikube-integration/21835-5913/.minikube/cache/preloaded-tarball: no such file or directory
	I1101 08:29:07.306761    9426 notify.go:221] Checking for updates...
	I1101 08:29:07.307951    9426 out.go:171] MINIKUBE_LOCATION=21835
	I1101 08:29:07.309231    9426 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1101 08:29:07.310459    9426 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21835-5913/kubeconfig
	I1101 08:29:07.311603    9426 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21835-5913/.minikube
	I1101 08:29:07.313044    9426 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W1101 08:29:07.315295    9426 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1101 08:29:07.315576    9426 driver.go:422] Setting default libvirt URI to qemu:///system
	I1101 08:29:07.340752    9426 docker.go:124] docker version: linux-28.5.1:Docker Engine - Community
	I1101 08:29:07.340835    9426 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 08:29:07.773439    9426 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:64 SystemTime:2025-11-01 08:29:07.761730972 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1101 08:29:07.773559    9426 docker.go:319] overlay module found
	I1101 08:29:07.775104    9426 out.go:99] Using the docker driver based on user configuration
	I1101 08:29:07.775137    9426 start.go:309] selected driver: docker
	I1101 08:29:07.775145    9426 start.go:930] validating driver "docker" against <nil>
	I1101 08:29:07.775254    9426 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 08:29:07.843390    9426 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:64 SystemTime:2025-11-01 08:29:07.833758962 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1101 08:29:07.843549    9426 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1101 08:29:07.844086    9426 start_flags.go:410] Using suggested 8000MB memory alloc based on sys=32093MB, container=32093MB
	I1101 08:29:07.844278    9426 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1101 08:29:07.846146    9426 out.go:171] Using Docker driver with root privileges
	
	
	* The control-plane node download-only-578604 host does not exist
	  To start a cluster, run: "minikube start -p download-only-578604"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.0/LogsDuration (0.07s)

                                                
                                    
TestDownloadOnly/v1.28.0/DeleteAll (0.24s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.28.0/DeleteAll (0.24s)

                                                
                                    
TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.15s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-578604
--- PASS: TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.15s)

                                                
                                    
TestDownloadOnly/v1.34.1/json-events (3.95s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-292520 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-292520 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=docker  --container-runtime=crio: (3.95384479s)
--- PASS: TestDownloadOnly/v1.34.1/json-events (3.95s)

                                                
                                    
TestDownloadOnly/v1.34.1/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/preload-exists
I1101 08:29:16.236529    9414 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
I1101 08:29:16.236571    9414 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21835-5913/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.34.1/preload-exists (0.00s)
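As the preload.go lines above show, this subtest is only a lookup: it passes as soon as the preloaded image tarball is already sitting in the local minikube cache from the earlier download step. A rough manual equivalent of the check, using the exact path reported in this run, would be:

    # confirm the cri-o preload tarball that preload.go reports as "Found local preload"
    ls -lh /home/jenkins/minikube-integration/21835-5913/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4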

                                                
                                    
TestDownloadOnly/v1.34.1/LogsDuration (0.08s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-292520
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-292520: exit status 85 (75.716385ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                   ARGS                                                                                    │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-578604 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-578604 │ jenkins │ v1.37.0 │ 01 Nov 25 08:29 UTC │                     │
	│ delete  │ --all                                                                                                                                                                     │ minikube             │ jenkins │ v1.37.0 │ 01 Nov 25 08:29 UTC │ 01 Nov 25 08:29 UTC │
	│ delete  │ -p download-only-578604                                                                                                                                                   │ download-only-578604 │ jenkins │ v1.37.0 │ 01 Nov 25 08:29 UTC │ 01 Nov 25 08:29 UTC │
	│ start   │ -o=json --download-only -p download-only-292520 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-292520 │ jenkins │ v1.37.0 │ 01 Nov 25 08:29 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/01 08:29:12
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1101 08:29:12.337151    9778 out.go:360] Setting OutFile to fd 1 ...
	I1101 08:29:12.337810    9778 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 08:29:12.337820    9778 out.go:374] Setting ErrFile to fd 2...
	I1101 08:29:12.337825    9778 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 08:29:12.338025    9778 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21835-5913/.minikube/bin
	I1101 08:29:12.338503    9778 out.go:368] Setting JSON to true
	I1101 08:29:12.339659    9778 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":700,"bootTime":1761985052,"procs":174,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1101 08:29:12.339797    9778 start.go:143] virtualization: kvm guest
	I1101 08:29:12.341750    9778 out.go:99] [download-only-292520] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1101 08:29:12.341918    9778 notify.go:221] Checking for updates...
	I1101 08:29:12.343430    9778 out.go:171] MINIKUBE_LOCATION=21835
	I1101 08:29:12.345250    9778 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1101 08:29:12.346503    9778 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21835-5913/kubeconfig
	I1101 08:29:12.347889    9778 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21835-5913/.minikube
	I1101 08:29:12.349360    9778 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W1101 08:29:12.352233    9778 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1101 08:29:12.352511    9778 driver.go:422] Setting default libvirt URI to qemu:///system
	I1101 08:29:12.376627    9778 docker.go:124] docker version: linux-28.5.1:Docker Engine - Community
	I1101 08:29:12.376727    9778 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 08:29:12.434619    9778 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:52 SystemTime:2025-11-01 08:29:12.425246883 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1101 08:29:12.434744    9778 docker.go:319] overlay module found
	I1101 08:29:12.436128    9778 out.go:99] Using the docker driver based on user configuration
	I1101 08:29:12.436177    9778 start.go:309] selected driver: docker
	I1101 08:29:12.436182    9778 start.go:930] validating driver "docker" against <nil>
	I1101 08:29:12.436257    9778 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 08:29:12.495880    9778 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:52 SystemTime:2025-11-01 08:29:12.486254777 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1101 08:29:12.496045    9778 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1101 08:29:12.496576    9778 start_flags.go:410] Using suggested 8000MB memory alloc based on sys=32093MB, container=32093MB
	I1101 08:29:12.496712    9778 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1101 08:29:12.498406    9778 out.go:171] Using Docker driver with root privileges
	
	
	* The control-plane node download-only-292520 host does not exist
	  To start a cluster, run: "minikube start -p download-only-292520"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.34.1/LogsDuration (0.08s)

                                                
                                    
TestDownloadOnly/v1.34.1/DeleteAll (0.24s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.34.1/DeleteAll (0.24s)

                                                
                                    
TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds (0.15s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-292520
--- PASS: TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds (0.15s)

                                                
                                    
TestDownloadOnlyKic (0.43s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:231: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p download-docker-005011 --alsologtostderr --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "download-docker-005011" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p download-docker-005011
--- PASS: TestDownloadOnlyKic (0.43s)

                                                
                                    
TestBinaryMirror (0.84s)

                                                
                                                
=== RUN   TestBinaryMirror
I1101 08:29:17.425786    9414 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubectl.sha256
aaa_download_only_test.go:309: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-218549 --alsologtostderr --binary-mirror http://127.0.0.1:39227 --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-218549" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-218549
--- PASS: TestBinaryMirror (0.84s)
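TestBinaryMirror starts a download-only profile with --binary-mirror pointed at a short-lived local HTTP server (the 127.0.0.1:39227 address above is that test server, not a general-purpose endpoint). A sketch of the same invocation for a manual run, with a hypothetical profile name and whatever mirror actually serves the Kubernetes binaries:

    # binary-mirror-demo is a made-up profile name; replace the mirror URL with your own
    out/minikube-linux-amd64 start --download-only -p binary-mirror-demo \
      --binary-mirror http://127.0.0.1:39227 \
      --driver=docker --container-runtime=crio
    out/minikube-linux-amd64 delete -p binary-mirror-demo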

                                                
                                    
TestOffline (85.7s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-339605 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=docker  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-339605 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=docker  --container-runtime=crio: (1m23.049738201s)
helpers_test.go:175: Cleaning up "offline-crio-339605" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-339605
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-crio-339605: (2.654068498s)
--- PASS: TestOffline (85.70s)

                                                
                                    
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1000: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-491859
addons_test.go:1000: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-491859: exit status 85 (69.343212ms)

                                                
                                                
-- stdout --
	* Profile "addons-491859" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-491859"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

                                                
                                    
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1011: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-491859
addons_test.go:1011: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-491859: exit status 85 (73.945323ms)

                                                
                                                
-- stdout --
	* Profile "addons-491859" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-491859"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

                                                
                                    
TestAddons/Setup (144.54s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:108: (dbg) Run:  out/minikube-linux-amd64 start -p addons-491859 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:108: (dbg) Done: out/minikube-linux-amd64 start -p addons-491859 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (2m24.54380777s)
--- PASS: TestAddons/Setup (144.54s)
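This start command is what every later TestAddons subtest runs against, so the addon list here is worth reading once. These are the same flags as logged above, just wrapped and grouped for readability:

    out/minikube-linux-amd64 start -p addons-491859 --wait=true --memory=4096 --alsologtostderr \
      --driver=docker --container-runtime=crio \
      --addons=registry --addons=registry-creds --addons=metrics-server \
      --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth \
      --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin \
      --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin \
      --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher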

                                                
                                    
TestAddons/serial/GCPAuth/Namespaces (0.14s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:630: (dbg) Run:  kubectl --context addons-491859 create ns new-namespace
addons_test.go:644: (dbg) Run:  kubectl --context addons-491859 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.14s)

                                                
                                    
TestAddons/serial/GCPAuth/FakeCredentials (7.43s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:675: (dbg) Run:  kubectl --context addons-491859 create -f testdata/busybox.yaml
addons_test.go:682: (dbg) Run:  kubectl --context addons-491859 create sa gcp-auth-test
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [2ae16e08-af4f-4803-85d8-2d9acd18bb15] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [2ae16e08-af4f-4803-85d8-2d9acd18bb15] Running
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 7.003841639s
addons_test.go:694: (dbg) Run:  kubectl --context addons-491859 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:706: (dbg) Run:  kubectl --context addons-491859 describe sa gcp-auth-test
addons_test.go:744: (dbg) Run:  kubectl --context addons-491859 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (7.43s)
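The assertions here reduce to exec'ing into the busybox pod and reading the environment that the gcp-auth addon injects. The same checks by hand, copied from the commands logged above:

    kubectl --context addons-491859 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
    kubectl --context addons-491859 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"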

                                                
                                    
TestAddons/StoppedEnableDisable (16.65s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-491859
addons_test.go:172: (dbg) Done: out/minikube-linux-amd64 stop -p addons-491859: (16.352444571s)
addons_test.go:176: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-491859
addons_test.go:180: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-491859
addons_test.go:185: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-491859
--- PASS: TestAddons/StoppedEnableDisable (16.65s)
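This test exercises addon enable/disable against a stopped profile, and the final command suggests that disabling an addon that was never enabled (gvisor) is also tolerated. The same sequence by hand:

    out/minikube-linux-amd64 stop -p addons-491859
    out/minikube-linux-amd64 addons enable dashboard -p addons-491859
    out/minikube-linux-amd64 addons disable dashboard -p addons-491859
    out/minikube-linux-amd64 addons disable gvisor -p addons-491859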

                                                
                                    
TestCertOptions (30.41s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-403136 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-403136 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio: (25.201590978s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-403136 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-403136 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-403136 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-403136" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-403136
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-403136: (4.332149602s)
--- PASS: TestCertOptions (30.41s)
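The point of this test is that the extra --apiserver-ips/--apiserver-names/--apiserver-port values end up in the apiserver certificate. Assuming a profile started with the flags above, the SANs can be inspected the same way the test does; the grep filter below is an addition for readability, not part of the test:

    out/minikube-linux-amd64 -p cert-options-403136 ssh \
      "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" \
      | grep -A1 "Subject Alternative Name"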

                                                
                                    
TestCertExpiration (209.64s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-303094 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-303094 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio: (21.320269211s)
E1101 09:16:43.470917    9414 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21835-5913/.minikube/profiles/addons-491859/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-303094 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-303094 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio: (5.805633692s)
helpers_test.go:175: Cleaning up "cert-expiration-303094" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-303094
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-303094: (2.518062752s)
--- PASS: TestCertExpiration (209.64s)
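The flow here is: start a cluster whose certificates expire in three minutes, let most of that window pass (which is where the roughly three-minute gap in this test's runtime goes), then start the same profile again with a long expiry and confirm the start still succeeds. A condensed manual version with a hypothetical profile name:

    # cert-expiration-demo is a made-up profile name, not from this run
    out/minikube-linux-amd64 start -p cert-expiration-demo --memory=3072 --cert-expiration=3m \
      --driver=docker --container-runtime=crio
    # ...wait for the short-lived certificates to approach expiry...
    out/minikube-linux-amd64 start -p cert-expiration-demo --memory=3072 --cert-expiration=8760h \
      --driver=docker --container-runtime=crio
    out/minikube-linux-amd64 delete -p cert-expiration-demo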

                                                
                                    
TestForceSystemdFlag (24.4s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-773418 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-773418 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (21.620147988s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-773418 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-773418" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-773418
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-773418: (2.467736774s)
--- PASS: TestForceSystemdFlag (24.40s)
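The --force-systemd flag is verified indirectly: the test just cats the CRI-O drop-in and, presumably, looks for a systemd cgroup manager setting in it (the exact key it matches on lives in docker_test.go, not in this log). The inspection step on its own:

    out/minikube-linux-amd64 -p force-systemd-flag-773418 ssh \
      "cat /etc/crio/crio.conf.d/02-crio.conf"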

                                                
                                    
TestForceSystemdEnv (35.66s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-363365 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-363365 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (33.009913522s)
helpers_test.go:175: Cleaning up "force-systemd-env-363365" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-363365
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-363365: (2.645166033s)
--- PASS: TestForceSystemdEnv (35.66s)

                                                
                                    
TestErrorSpam/setup (21.38s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-653430 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-653430 --driver=docker  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-653430 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-653430 --driver=docker  --container-runtime=crio: (21.381519397s)
--- PASS: TestErrorSpam/setup (21.38s)

                                                
                                    
TestErrorSpam/start (0.69s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:206: Cleaning up 1 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-653430 --log_dir /tmp/nospam-653430 start --dry-run
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-653430 --log_dir /tmp/nospam-653430 start --dry-run
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-653430 --log_dir /tmp/nospam-653430 start --dry-run
--- PASS: TestErrorSpam/start (0.69s)

                                                
                                    
TestErrorSpam/status (0.99s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-653430 --log_dir /tmp/nospam-653430 status
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-653430 --log_dir /tmp/nospam-653430 status
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-653430 --log_dir /tmp/nospam-653430 status
--- PASS: TestErrorSpam/status (0.99s)

                                                
                                    
TestErrorSpam/pause (5.95s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-653430 --log_dir /tmp/nospam-653430 pause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-653430 --log_dir /tmp/nospam-653430 pause: exit status 80 (1.603622104s)

                                                
                                                
-- stdout --
	* Pausing node nospam-653430 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T08:35:20Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:151: "out/minikube-linux-amd64 -p nospam-653430 --log_dir /tmp/nospam-653430 pause" failed: exit status 80
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-653430 --log_dir /tmp/nospam-653430 pause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-653430 --log_dir /tmp/nospam-653430 pause: exit status 80 (2.102071236s)

                                                
                                                
-- stdout --
	* Pausing node nospam-653430 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T08:35:22Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:151: "out/minikube-linux-amd64 -p nospam-653430 --log_dir /tmp/nospam-653430 pause" failed: exit status 80
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-653430 --log_dir /tmp/nospam-653430 pause
error_spam_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-653430 --log_dir /tmp/nospam-653430 pause: exit status 80 (2.240205736s)

                                                
                                                
-- stdout --
	* Pausing node nospam-653430 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T08:35:24Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:174: "out/minikube-linux-amd64 -p nospam-653430 --log_dir /tmp/nospam-653430 pause" failed: exit status 80
--- PASS: TestErrorSpam/pause (5.95s)
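Worth noting: this subtest passes even though all three pause attempts exit 80, so it is evidently asserting on unexpected log spam rather than on pause succeeding. The failure itself comes from the command minikube runs on the node before pausing ("sudo runc list -f json" per the stderr above), which can be reproduced directly:

    out/minikube-linux-amd64 -p nospam-653430 ssh "sudo runc list -f json"
    # in this run it fails with: open /run/runc: no such file or directory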

                                                
                                    
TestErrorSpam/unpause (5.59s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-653430 --log_dir /tmp/nospam-653430 unpause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-653430 --log_dir /tmp/nospam-653430 unpause: exit status 80 (1.719432844s)

                                                
                                                
-- stdout --
	* Unpausing node nospam-653430 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T08:35:26Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:151: "out/minikube-linux-amd64 -p nospam-653430 --log_dir /tmp/nospam-653430 unpause" failed: exit status 80
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-653430 --log_dir /tmp/nospam-653430 unpause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-653430 --log_dir /tmp/nospam-653430 unpause: exit status 80 (1.879355014s)

                                                
                                                
-- stdout --
	* Unpausing node nospam-653430 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T08:35:28Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:151: "out/minikube-linux-amd64 -p nospam-653430 --log_dir /tmp/nospam-653430 unpause" failed: exit status 80
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-653430 --log_dir /tmp/nospam-653430 unpause
error_spam_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-653430 --log_dir /tmp/nospam-653430 unpause: exit status 80 (1.985838442s)

                                                
                                                
-- stdout --
	* Unpausing node nospam-653430 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T08:35:30Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:174: "out/minikube-linux-amd64 -p nospam-653430 --log_dir /tmp/nospam-653430 unpause" failed: exit status 80
--- PASS: TestErrorSpam/unpause (5.59s)

                                                
                                    
TestErrorSpam/stop (8.19s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-653430 --log_dir /tmp/nospam-653430 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-amd64 -p nospam-653430 --log_dir /tmp/nospam-653430 stop: (7.976324226s)
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-653430 --log_dir /tmp/nospam-653430 stop
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-653430 --log_dir /tmp/nospam-653430 stop
--- PASS: TestErrorSpam/stop (8.19s)

                                                
                                    
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/21835-5913/.minikube/files/etc/test/nested/copy/9414/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
TestFunctional/serial/StartWithProxy (37.96s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-amd64 start -p functional-290156 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio
functional_test.go:2239: (dbg) Done: out/minikube-linux-amd64 start -p functional-290156 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio: (37.955396622s)
--- PASS: TestFunctional/serial/StartWithProxy (37.96s)

                                                
                                    
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
TestFunctional/serial/SoftStart (6.32s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
I1101 08:36:20.887569    9414 config.go:182] Loaded profile config "functional-290156": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
functional_test.go:674: (dbg) Run:  out/minikube-linux-amd64 start -p functional-290156 --alsologtostderr -v=8
functional_test.go:674: (dbg) Done: out/minikube-linux-amd64 start -p functional-290156 --alsologtostderr -v=8: (6.314765743s)
functional_test.go:678: soft start took 6.317945064s for "functional-290156" cluster.
I1101 08:36:27.205228    9414 config.go:182] Loaded profile config "functional-290156": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestFunctional/serial/SoftStart (6.32s)

                                                
                                    
TestFunctional/serial/KubeContext (0.04s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

                                                
                                    
TestFunctional/serial/KubectlGetPods (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-290156 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.07s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_remote (2.77s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-290156 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-290156 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Done: out/minikube-linux-amd64 -p functional-290156 cache add registry.k8s.io/pause:3.3: (1.034083324s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-290156 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (2.77s)
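The cache group adds images to the shared cache under MINIKUBE_HOME and, as the later subtests suggest, loads them into the profile's container runtime; verify_cache_inside_node then checks the result with crictl from inside the node. A manual round trip with one of the same images:

    out/minikube-linux-amd64 -p functional-290156 cache add registry.k8s.io/pause:3.1
    out/minikube-linux-amd64 cache list
    out/minikube-linux-amd64 -p functional-290156 ssh sudo crictl images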

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_local (1.17s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-290156 /tmp/TestFunctionalserialCacheCmdcacheadd_local2123320367/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-amd64 -p functional-290156 cache add minikube-local-cache-test:functional-290156
functional_test.go:1109: (dbg) Run:  out/minikube-linux-amd64 -p functional-290156 cache delete minikube-local-cache-test:functional-290156
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-290156
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.17s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/list (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.3s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-amd64 -p functional-290156 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.30s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/cache_reload (1.58s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-amd64 -p functional-290156 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 -p functional-290156 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-290156 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (290.063948ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-amd64 -p functional-290156 cache reload
functional_test.go:1178: (dbg) Run:  out/minikube-linux-amd64 -p functional-290156 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.58s)
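The reload sequence above is the most self-explanatory of the cache subtests: delete the image from the node, confirm crictl no longer finds it, then have cache reload push the cached copy back in. Condensed, with the expected outcomes as comments:

    out/minikube-linux-amd64 -p functional-290156 ssh sudo crictl rmi registry.k8s.io/pause:latest
    out/minikube-linux-amd64 -p functional-290156 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # fails: image gone
    out/minikube-linux-amd64 -p functional-290156 cache reload
    out/minikube-linux-amd64 -p functional-290156 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # succeeds after the reload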

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.13s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.13s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmd (0.12s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-amd64 -p functional-290156 kubectl -- --context functional-290156 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.12s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.12s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-290156 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.12s)

                                                
                                    
TestFunctional/serial/ExtraConfig (60.79s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-amd64 start -p functional-290156 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1101 08:36:43.477125    9414 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21835-5913/.minikube/profiles/addons-491859/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 08:36:43.483532    9414 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21835-5913/.minikube/profiles/addons-491859/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 08:36:43.495004    9414 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21835-5913/.minikube/profiles/addons-491859/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 08:36:43.516489    9414 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21835-5913/.minikube/profiles/addons-491859/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 08:36:43.557974    9414 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21835-5913/.minikube/profiles/addons-491859/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 08:36:43.639403    9414 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21835-5913/.minikube/profiles/addons-491859/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 08:36:43.800955    9414 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21835-5913/.minikube/profiles/addons-491859/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 08:36:44.122744    9414 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21835-5913/.minikube/profiles/addons-491859/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 08:36:44.764854    9414 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21835-5913/.minikube/profiles/addons-491859/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 08:36:46.046497    9414 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21835-5913/.minikube/profiles/addons-491859/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 08:36:48.607912    9414 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21835-5913/.minikube/profiles/addons-491859/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 08:36:53.729541    9414 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21835-5913/.minikube/profiles/addons-491859/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 08:37:03.971043    9414 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21835-5913/.minikube/profiles/addons-491859/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 08:37:24.452524    9414 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21835-5913/.minikube/profiles/addons-491859/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:772: (dbg) Done: out/minikube-linux-amd64 start -p functional-290156 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (1m0.784798266s)
functional_test.go:776: restart took 1m0.784994891s for "functional-290156" cluster.
I1101 08:37:34.413030    9414 config.go:182] Loaded profile config "functional-290156": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestFunctional/serial/ExtraConfig (60.79s)

                                                
                                    
TestFunctional/serial/ComponentHealth (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-290156 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)
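
The assertions above require every tier=control-plane pod in kube-system to be in phase Running and to report Ready. A rough standalone equivalent for checking a cluster by hand is sketched below (assumes kubectl and the functional-290156 context; not the suite's actual helper):

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// podList captures only the fields the check needs from `kubectl get po -o json`.
type podList struct {
	Items []struct {
		Metadata struct {
			Name string
		}
		Status struct {
			Phase      string
			Conditions []struct {
				Type   string
				Status string
			}
		}
	}
}

func main() {
	out, err := exec.Command("kubectl", "--context", "functional-290156",
		"get", "po", "-l", "tier=control-plane", "-n", "kube-system", "-o", "json").Output()
	if err != nil {
		panic(err)
	}
	var pods podList
	if err := json.Unmarshal(out, &pods); err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		ready := "Unknown"
		for _, c := range p.Status.Conditions {
			if c.Type == "Ready" {
				ready = c.Status
			}
		}
		// Healthy control-plane pods print phase=Running ready=True.
		fmt.Printf("%s phase=%s ready=%s\n", p.Metadata.Name, p.Status.Phase, ready)
	}
}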

                                                
                                    
TestFunctional/serial/LogsCmd (1.28s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-amd64 -p functional-290156 logs
functional_test.go:1251: (dbg) Done: out/minikube-linux-amd64 -p functional-290156 logs: (1.275095391s)
--- PASS: TestFunctional/serial/LogsCmd (1.28s)

                                                
                                    
TestFunctional/serial/LogsFileCmd (1.29s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-amd64 -p functional-290156 logs --file /tmp/TestFunctionalserialLogsFileCmd3252007723/001/logs.txt
functional_test.go:1265: (dbg) Done: out/minikube-linux-amd64 -p functional-290156 logs --file /tmp/TestFunctionalserialLogsFileCmd3252007723/001/logs.txt: (1.292803189s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.29s)

                                                
                                    
TestFunctional/serial/InvalidService (3.62s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-290156 apply -f testdata/invalidsvc.yaml
functional_test.go:2340: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-290156
functional_test.go:2340: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-290156: exit status 115 (356.905502ms)

                                                
                                                
-- stdout --
	┌───────────┬─────────────┬─────────────┬───────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │            URL            │
	├───────────┼─────────────┼─────────────┼───────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.49.2:31203 │
	└───────────┴─────────────┴─────────────┴───────────────────────────┘
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2332: (dbg) Run:  kubectl --context functional-290156 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (3.62s)

                                                
                                    
TestFunctional/parallel/ConfigCmd (0.46s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-290156 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-290156 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-290156 config get cpus: exit status 14 (87.729313ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-290156 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-290156 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-290156 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-290156 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-290156 config get cpus: exit status 14 (73.791076ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.46s)

                                                
                                    
TestFunctional/parallel/DashboardCmd (6.35s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-290156 --alsologtostderr -v=1]
functional_test.go:925: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-290156 --alsologtostderr -v=1] ...
helpers_test.go:525: unable to kill pid 48267: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (6.35s)

                                                
                                    
TestFunctional/parallel/DryRun (0.39s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-amd64 start -p functional-290156 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-290156 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (170.434033ms)

                                                
                                                
-- stdout --
	* [functional-290156] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21835
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21835-5913/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21835-5913/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1101 08:38:29.804088   47769 out.go:360] Setting OutFile to fd 1 ...
	I1101 08:38:29.804246   47769 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 08:38:29.804257   47769 out.go:374] Setting ErrFile to fd 2...
	I1101 08:38:29.804263   47769 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 08:38:29.804534   47769 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21835-5913/.minikube/bin
	I1101 08:38:29.804994   47769 out.go:368] Setting JSON to false
	I1101 08:38:29.805998   47769 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":1258,"bootTime":1761985052,"procs":240,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1101 08:38:29.806053   47769 start.go:143] virtualization: kvm guest
	I1101 08:38:29.808385   47769 out.go:179] * [functional-290156] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1101 08:38:29.809692   47769 out.go:179]   - MINIKUBE_LOCATION=21835
	I1101 08:38:29.809732   47769 notify.go:221] Checking for updates...
	I1101 08:38:29.812487   47769 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1101 08:38:29.814014   47769 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21835-5913/kubeconfig
	I1101 08:38:29.815285   47769 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21835-5913/.minikube
	I1101 08:38:29.816481   47769 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1101 08:38:29.817786   47769 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1101 08:38:29.819425   47769 config.go:182] Loaded profile config "functional-290156": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 08:38:29.819917   47769 driver.go:422] Setting default libvirt URI to qemu:///system
	I1101 08:38:29.843796   47769 docker.go:124] docker version: linux-28.5.1:Docker Engine - Community
	I1101 08:38:29.843899   47769 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 08:38:29.901898   47769 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:56 SystemTime:2025-11-01 08:38:29.891332544 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1101 08:38:29.902002   47769 docker.go:319] overlay module found
	I1101 08:38:29.904804   47769 out.go:179] * Using the docker driver based on existing profile
	I1101 08:38:29.906000   47769 start.go:309] selected driver: docker
	I1101 08:38:29.906017   47769 start.go:930] validating driver "docker" against &{Name:functional-290156 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-290156 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Moun
tPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 08:38:29.906104   47769 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1101 08:38:29.907800   47769 out.go:203] 
	W1101 08:38:29.908958   47769 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1101 08:38:29.910285   47769 out.go:203] 

                                                
                                                
** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-amd64 start -p functional-290156 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.39s)
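
The non-zero exit here is the expected guard: with --dry-run, the requested 250MB is rejected against the usable minimum of 1800MB quoted in the error before any container is created. As a rough illustration of that kind of pre-flight check (not minikube's actual implementation), the validation boils down to:

package main

import "fmt"

// minUsableMemoryMB mirrors the minimum quoted in the RSRC_INSUFFICIENT_REQ_MEMORY message above.
const minUsableMemoryMB = 1800

// validateMemoryMB rejects allocations below the usable minimum.
func validateMemoryMB(requestedMB int) error {
	if requestedMB < minUsableMemoryMB {
		return fmt.Errorf("requested memory allocation %dMiB is less than the usable minimum of %dMB",
			requestedMB, minUsableMemoryMB)
	}
	return nil
}

func main() {
	fmt.Println(validateMemoryMB(250))  // rejected, as in the --dry-run output above
	fmt.Println(validateMemoryMB(4096)) // the profile's configured 4096MB passes
}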

                                                
                                    
TestFunctional/parallel/InternationalLanguage (0.17s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-amd64 start -p functional-290156 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-290156 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (170.27161ms)

                                                
                                                
-- stdout --
	* [functional-290156] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21835
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21835-5913/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21835-5913/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1101 08:38:30.196589   47997 out.go:360] Setting OutFile to fd 1 ...
	I1101 08:38:30.196708   47997 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 08:38:30.196715   47997 out.go:374] Setting ErrFile to fd 2...
	I1101 08:38:30.196721   47997 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 08:38:30.197081   47997 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21835-5913/.minikube/bin
	I1101 08:38:30.197570   47997 out.go:368] Setting JSON to false
	I1101 08:38:30.198589   47997 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":1258,"bootTime":1761985052,"procs":240,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1101 08:38:30.198696   47997 start.go:143] virtualization: kvm guest
	I1101 08:38:30.200833   47997 out.go:179] * [functional-290156] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	I1101 08:38:30.202265   47997 out.go:179]   - MINIKUBE_LOCATION=21835
	I1101 08:38:30.202292   47997 notify.go:221] Checking for updates...
	I1101 08:38:30.205355   47997 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1101 08:38:30.206687   47997 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21835-5913/kubeconfig
	I1101 08:38:30.207882   47997 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21835-5913/.minikube
	I1101 08:38:30.209331   47997 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1101 08:38:30.210964   47997 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1101 08:38:30.212843   47997 config.go:182] Loaded profile config "functional-290156": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 08:38:30.213321   47997 driver.go:422] Setting default libvirt URI to qemu:///system
	I1101 08:38:30.237186   47997 docker.go:124] docker version: linux-28.5.1:Docker Engine - Community
	I1101 08:38:30.237295   47997 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 08:38:30.296309   47997 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:56 SystemTime:2025-11-01 08:38:30.285327225 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1101 08:38:30.296430   47997 docker.go:319] overlay module found
	I1101 08:38:30.298205   47997 out.go:179] * Utilisation du pilote docker basé sur le profil existant
	I1101 08:38:30.299643   47997 start.go:309] selected driver: docker
	I1101 08:38:30.299663   47997 start.go:930] validating driver "docker" against &{Name:functional-290156 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-290156 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Moun
tPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 08:38:30.299770   47997 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1101 08:38:30.301790   47997 out.go:203] 
	W1101 08:38:30.303043   47997 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1101 08:38:30.304396   47997 out.go:203] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.17s)

                                                
                                    
TestFunctional/parallel/StatusCmd (0.99s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-amd64 -p functional-290156 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-amd64 -p functional-290156 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-amd64 -p functional-290156 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.99s)

                                                
                                    
TestFunctional/parallel/AddonsCmd (0.15s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-amd64 -p functional-290156 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-amd64 -p functional-290156 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.15s)

                                                
                                    
TestFunctional/parallel/PersistentVolumeClaim (54.17s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:352: "storage-provisioner" [a5b40a3c-3698-4ece-921d-eaea0cc60f61] Running
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.004415238s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-290156 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-290156 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-290156 get pvc myclaim -o=json
I1101 08:37:47.879524    9414 retry.go:31] will retry after 1.123876038s: testpvc phase = "Pending", want "Bound" (msg={TypeMeta:{Kind:PersistentVolumeClaim APIVersion:v1} ObjectMeta:{Name:myclaim GenerateName: Namespace:default SelfLink: UID:dccc4037-a9d2-4735-8916-8569c7165075 ResourceVersion:608 Generation:0 CreationTimestamp:2025-11-01 08:37:47 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[] Annotations:map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"myclaim","namespace":"default"},"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"500Mi"}},"volumeMode":"Filesystem"}}
volume.beta.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath volume.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath] OwnerReferences:[] Finalizers:[kubernetes.io/pvc-protection] ManagedFields:[]} Spec:{AccessModes:[ReadWriteOnce] Selector:nil Resources:{Limits:map[] Requests:map[storage:{i:{value:524288000 scale:0} d:{Dec:<nil>} s:500Mi Format:BinarySI}]} VolumeName: StorageClassName:0xc001a76060 VolumeMode:0xc001a76070 DataSource:nil DataSourceRef:nil VolumeAttributesClassName:<nil>} Status:{Phase:Pending AccessModes:[] Capacity:map[] Conditions:[] AllocatedResources:map[] AllocatedResourceStatuses:map[] CurrentVolumeAttributesClassName:<nil> ModifyVolumeStatus:nil}})
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-290156 get pvc myclaim -o=json
I1101 08:37:49.060366    9414 retry.go:31] will retry after 2.485670847s: testpvc phase = "Pending", want "Bound" (msg={TypeMeta:{Kind:PersistentVolumeClaim APIVersion:v1} ObjectMeta:{Name:myclaim GenerateName: Namespace:default SelfLink: UID:dccc4037-a9d2-4735-8916-8569c7165075 ResourceVersion:608 Generation:0 CreationTimestamp:2025-11-01 08:37:47 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[] Annotations:map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"myclaim","namespace":"default"},"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"500Mi"}},"volumeMode":"Filesystem"}}
volume.beta.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath volume.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath] OwnerReferences:[] Finalizers:[kubernetes.io/pvc-protection] ManagedFields:[]} Spec:{AccessModes:[ReadWriteOnce] Selector:nil Resources:{Limits:map[] Requests:map[storage:{i:{value:524288000 scale:0} d:{Dec:<nil>} s:500Mi Format:BinarySI}]} VolumeName: StorageClassName:0xc001ab2980 VolumeMode:0xc001ab2990 DataSource:nil DataSourceRef:nil VolumeAttributesClassName:<nil>} Status:{Phase:Pending AccessModes:[] Capacity:map[] Conditions:[] AllocatedResources:map[] AllocatedResourceStatuses:map[] CurrentVolumeAttributesClassName:<nil> ModifyVolumeStatus:nil}})
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-290156 get pvc myclaim -o=json
I1101 08:37:51.606440    9414 retry.go:31] will retry after 3.723899477s: testpvc phase = "Pending", want "Bound" (msg={TypeMeta:{Kind:PersistentVolumeClaim APIVersion:v1} ObjectMeta:{Name:myclaim GenerateName: Namespace:default SelfLink: UID:dccc4037-a9d2-4735-8916-8569c7165075 ResourceVersion:608 Generation:0 CreationTimestamp:2025-11-01 08:37:47 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[] Annotations:map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"myclaim","namespace":"default"},"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"500Mi"}},"volumeMode":"Filesystem"}}
volume.beta.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath volume.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath] OwnerReferences:[] Finalizers:[kubernetes.io/pvc-protection] ManagedFields:[]} Spec:{AccessModes:[ReadWriteOnce] Selector:nil Resources:{Limits:map[] Requests:map[storage:{i:{value:524288000 scale:0} d:{Dec:<nil>} s:500Mi Format:BinarySI}]} VolumeName: StorageClassName:0xc001677d90 VolumeMode:0xc001677da0 DataSource:nil DataSourceRef:nil VolumeAttributesClassName:<nil>} Status:{Phase:Pending AccessModes:[] Capacity:map[] Conditions:[] AllocatedResources:map[] AllocatedResourceStatuses:map[] CurrentVolumeAttributesClassName:<nil> ModifyVolumeStatus:nil}})
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-290156 get pvc myclaim -o=json
I1101 08:37:55.387432    9414 retry.go:31] will retry after 9.894093785s: testpvc phase = "Pending", want "Bound" (msg={TypeMeta:{Kind:PersistentVolumeClaim APIVersion:v1} ObjectMeta:{Name:myclaim GenerateName: Namespace:default SelfLink: UID:dccc4037-a9d2-4735-8916-8569c7165075 ResourceVersion:608 Generation:0 CreationTimestamp:2025-11-01 08:37:47 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[] Annotations:map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"myclaim","namespace":"default"},"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"500Mi"}},"volumeMode":"Filesystem"}}
volume.beta.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath volume.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath] OwnerReferences:[] Finalizers:[kubernetes.io/pvc-protection] ManagedFields:[]} Spec:{AccessModes:[ReadWriteOnce] Selector:nil Resources:{Limits:map[] Requests:map[storage:{i:{value:524288000 scale:0} d:{Dec:<nil>} s:500Mi Format:BinarySI}]} VolumeName: StorageClassName:0xc001ab3b20 VolumeMode:0xc001ab3b30 DataSource:nil DataSourceRef:nil VolumeAttributesClassName:<nil>} Status:{Phase:Pending AccessModes:[] Capacity:map[] Conditions:[] AllocatedResources:map[] AllocatedResourceStatuses:map[] CurrentVolumeAttributesClassName:<nil> ModifyVolumeStatus:nil}})
I1101 08:38:00.704509    9414 retry.go:31] will retry after 3.403333363s: Temporary Error: Get "http://10.105.176.244": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-290156 get pvc myclaim -o=json
I1101 08:38:05.340482    9414 retry.go:31] will retry after 13.243028606s: testpvc phase = "Pending", want "Bound" (msg={TypeMeta:{Kind:PersistentVolumeClaim APIVersion:v1} ObjectMeta:{Name:myclaim GenerateName: Namespace:default SelfLink: UID:dccc4037-a9d2-4735-8916-8569c7165075 ResourceVersion:608 Generation:0 CreationTimestamp:2025-11-01 08:37:47 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[] Annotations:map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"myclaim","namespace":"default"},"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"500Mi"}},"volumeMode":"Filesystem"}}
volume.beta.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath volume.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath] OwnerReferences:[] Finalizers:[kubernetes.io/pvc-protection] ManagedFields:[]} Spec:{AccessModes:[ReadWriteOnce] Selector:nil Resources:{Limits:map[] Requests:map[storage:{i:{value:524288000 scale:0} d:{Dec:<nil>} s:500Mi Format:BinarySI}]} VolumeName: StorageClassName:0xc001b04db0 VolumeMode:0xc001b04dc0 DataSource:nil DataSourceRef:nil VolumeAttributesClassName:<nil>} Status:{Phase:Pending AccessModes:[] Capacity:map[] Conditions:[] AllocatedResources:map[] AllocatedResourceStatuses:map[] CurrentVolumeAttributesClassName:<nil> ModifyVolumeStatus:nil}})
E1101 08:38:05.414283    9414 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21835-5913/.minikube/profiles/addons-491859/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-290156 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-290156 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [3106bd38-ddfb-482a-accb-5aaf2e9b24f8] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:352: "sp-pod" [3106bd38-ddfb-482a-accb-5aaf2e9b24f8] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 9.005118592s
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-290156 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:112: (dbg) Run:  kubectl --context functional-290156 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-290156 apply -f testdata/storage-provisioner/pod.yaml
I1101 08:38:28.689489    9414 detect.go:223] nested VM detected
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [969972d7-c9a2-47b6-bee3-f850b24622a9] Pending
helpers_test.go:352: "sp-pod" [969972d7-c9a2-47b6-bee3-f850b24622a9] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:352: "sp-pod" [969972d7-c9a2-47b6-bee3-f850b24622a9] Running
2025/11/01 08:38:36 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 8.004092376s
functional_test_pvc_test.go:120: (dbg) Run:  kubectl --context functional-290156 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (54.17s)
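
The retry.go lines above show the harness polling the claim until its phase flips from Pending to Bound. A minimal standalone version of that wait loop is sketched below (an illustrative sketch, not the suite's retry helper; it shells out to kubectl against the functional-290156 context and the myclaim PVC from this run):

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// pvcPhase returns the status.phase of a PersistentVolumeClaim via kubectl.
func pvcPhase(kubeContext, name string) (string, error) {
	out, err := exec.Command("kubectl", "--context", kubeContext, "get", "pvc", name,
		"-o", "jsonpath={.status.phase}").Output()
	return strings.TrimSpace(string(out)), err
}

func main() {
	deadline := time.Now().Add(4 * time.Minute)
	for time.Now().Before(deadline) {
		phase, err := pvcPhase("functional-290156", "myclaim")
		if err == nil && phase == "Bound" {
			fmt.Println("pvc is Bound")
			return
		}
		fmt.Printf("pvc phase = %q, want \"Bound\"; retrying\n", phase)
		time.Sleep(2 * time.Second) // the harness uses a growing, randomized interval instead
	}
	fmt.Println("timed out waiting for myclaim to bind")
}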

                                                
                                    
TestFunctional/parallel/SSHCmd (0.63s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-amd64 -p functional-290156 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-amd64 -p functional-290156 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.63s)

                                                
                                    
TestFunctional/parallel/CpCmd (1.95s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-290156 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-290156 ssh -n functional-290156 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-290156 cp functional-290156:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd2837773555/001/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-290156 ssh -n functional-290156 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-290156 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-290156 ssh -n functional-290156 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.95s)

                                                
                                    
TestFunctional/parallel/MySQL (15.88s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1798: (dbg) Run:  kubectl --context functional-290156 replace --force -f testdata/mysql.yaml
functional_test.go:1804: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:352: "mysql-5bb876957f-twr9j" [07c07db6-e4a9-4dbe-97e7-a6a0d0b1d06d] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:352: "mysql-5bb876957f-twr9j" [07c07db6-e4a9-4dbe-97e7-a6a0d0b1d06d] Running
functional_test.go:1804: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 13.004014931s
functional_test.go:1812: (dbg) Run:  kubectl --context functional-290156 exec mysql-5bb876957f-twr9j -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-290156 exec mysql-5bb876957f-twr9j -- mysql -ppassword -e "show databases;": exit status 1 (90.9246ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1101 08:38:50.354214    9414 retry.go:31] will retry after 591.331266ms: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-290156 exec mysql-5bb876957f-twr9j -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-290156 exec mysql-5bb876957f-twr9j -- mysql -ppassword -e "show databases;": exit status 1 (87.951109ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1101 08:38:51.034294    9414 retry.go:31] will retry after 1.85285363s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-290156 exec mysql-5bb876957f-twr9j -- mysql -ppassword -e "show databases;"
E1101 08:39:27.336152    9414 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21835-5913/.minikube/profiles/addons-491859/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 08:41:43.470933    9414 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21835-5913/.minikube/profiles/addons-491859/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 08:42:11.177510    9414 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21835-5913/.minikube/profiles/addons-491859/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 08:46:43.471175    9414 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21835-5913/.minikube/profiles/addons-491859/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestFunctional/parallel/MySQL (15.88s)

                                                
                                    
TestFunctional/parallel/FileSync (0.3s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/9414/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-amd64 -p functional-290156 ssh "sudo cat /etc/test/nested/copy/9414/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.30s)

                                                
                                    
TestFunctional/parallel/CertSync (1.83s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/9414.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-290156 ssh "sudo cat /etc/ssl/certs/9414.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/9414.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-290156 ssh "sudo cat /usr/share/ca-certificates/9414.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-290156 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/94142.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-290156 ssh "sudo cat /etc/ssl/certs/94142.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/94142.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-290156 ssh "sudo cat /usr/share/ca-certificates/94142.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-290156 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.83s)

                                                
                                    
TestFunctional/parallel/NodeLabels (0.08s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-290156 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.08s)

                                                
                                    
TestFunctional/parallel/NonActiveRuntimeDisabled (0.66s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-290156 ssh "sudo systemctl is-active docker"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-290156 ssh "sudo systemctl is-active docker": exit status 1 (327.803866ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-290156 ssh "sudo systemctl is-active containerd"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-290156 ssh "sudo systemctl is-active containerd": exit status 1 (328.382528ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.66s)
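
The non-zero exits above are the expected outcome: systemctl is-active exits with a non-zero status (3, "inactive") when the queried unit is not running, and on a crio cluster both docker and containerd should be inactive. A minimal manual check, using the same commands the test runs (the crio line is an assumption about the selected runtime's unit name):
	out/minikube-linux-amd64 -p functional-290156 ssh "sudo systemctl is-active docker"      # prints "inactive", exits 3
	out/minikube-linux-amd64 -p functional-290156 ssh "sudo systemctl is-active containerd"  # prints "inactive", exits 3
	out/minikube-linux-amd64 -p functional-290156 ssh "sudo systemctl is-active crio"        # expected to print "active" and exit 0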

                                                
                                    
TestFunctional/parallel/License (0.4s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.40s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListShort (0.32s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-290156 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-290156 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.34.1
registry.k8s.io/kube-proxy:v1.34.1
registry.k8s.io/kube-controller-manager:v1.34.1
registry.k8s.io/kube-apiserver:v1.34.1
registry.k8s.io/etcd:3.6.4-0
registry.k8s.io/coredns/coredns:v1.12.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/kindest/kindnetd:v20250512-df8de77b
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-290156 image ls --format short --alsologtostderr:
I1101 08:38:38.572005   49279 out.go:360] Setting OutFile to fd 1 ...
I1101 08:38:38.572170   49279 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1101 08:38:38.572178   49279 out.go:374] Setting ErrFile to fd 2...
I1101 08:38:38.572184   49279 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1101 08:38:38.572511   49279 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21835-5913/.minikube/bin
I1101 08:38:38.574048   49279 config.go:182] Loaded profile config "functional-290156": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1101 08:38:38.574288   49279 config.go:182] Loaded profile config "functional-290156": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1101 08:38:38.575611   49279 cli_runner.go:164] Run: docker container inspect functional-290156 --format={{.State.Status}}
I1101 08:38:38.604356   49279 ssh_runner.go:195] Run: systemctl --version
I1101 08:38:38.604418   49279 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-290156
I1101 08:38:38.633989   49279 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21835-5913/.minikube/machines/functional-290156/id_rsa Username:docker}
I1101 08:38:38.748999   49279 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.32s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListTable (0.29s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-290156 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-290156 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────┬────────────────────┬───────────────┬────────┐
│                  IMAGE                  │        TAG         │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────┼────────────────────┼───────────────┼────────┤
│ registry.k8s.io/kube-apiserver          │ v1.34.1            │ c3994bc696102 │ 89MB   │
│ registry.k8s.io/kube-proxy              │ v1.34.1            │ fc25172553d79 │ 73.1MB │
│ registry.k8s.io/pause                   │ 3.1                │ da86e6ba6ca19 │ 747kB  │
│ gcr.io/k8s-minikube/storage-provisioner │ v5                 │ 6e38f40d628db │ 31.5MB │
│ registry.k8s.io/kube-scheduler          │ v1.34.1            │ 7dd6aaa1717ab │ 53.8MB │
│ docker.io/library/nginx                 │ latest             │ 9d0e6f6199dcb │ 155MB  │
│ gcr.io/k8s-minikube/busybox             │ 1.28.4-glibc       │ 56cc512116c8f │ 4.63MB │
│ registry.k8s.io/etcd                    │ 3.6.4-0            │ 5f1f5298c888d │ 196MB  │
│ registry.k8s.io/pause                   │ 3.10.1             │ cd073f4c5f6a8 │ 742kB  │
│ registry.k8s.io/pause                   │ 3.3                │ 0184c1613d929 │ 686kB  │
│ docker.io/kindest/kindnetd              │ v20250512-df8de77b │ 409467f978b4a │ 109MB  │
│ docker.io/library/nginx                 │ alpine             │ d4918ca78576a │ 54.3MB │
│ registry.k8s.io/coredns/coredns         │ v1.12.1            │ 52546a367cc9e │ 76.1MB │
│ registry.k8s.io/kube-controller-manager │ v1.34.1            │ c80c8dbafe7dd │ 76MB   │
│ registry.k8s.io/pause                   │ latest             │ 350b164e7ae1d │ 247kB  │
└─────────────────────────────────────────┴────────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-290156 image ls --format table --alsologtostderr:
I1101 08:38:39.174331   49384 out.go:360] Setting OutFile to fd 1 ...
I1101 08:38:39.174684   49384 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1101 08:38:39.174697   49384 out.go:374] Setting ErrFile to fd 2...
I1101 08:38:39.174704   49384 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1101 08:38:39.175052   49384 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21835-5913/.minikube/bin
I1101 08:38:39.175977   49384 config.go:182] Loaded profile config "functional-290156": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1101 08:38:39.176146   49384 config.go:182] Loaded profile config "functional-290156": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1101 08:38:39.176780   49384 cli_runner.go:164] Run: docker container inspect functional-290156 --format={{.State.Status}}
I1101 08:38:39.201787   49384 ssh_runner.go:195] Run: systemctl --version
I1101 08:38:39.201849   49384 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-290156
I1101 08:38:39.226088   49384 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21835-5913/.minikube/machines/functional-290156/id_rsa Username:docker}
I1101 08:38:39.339764   49384 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.29s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListJson (0.3s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-290156 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-290156 image ls --format json --alsologtostderr:
[{"id":"d4918ca78576a537caa7b0c043051c8efc1796de33fee8724ee0fff4a1cabed9","repoDigests":["docker.io/library/nginx@sha256:667473807103639a0aca5b49534a216d2b64f0fb868aaa801f023da0cdd781c7","docker.io/library/nginx@sha256:b3c656d55d7ad751196f21b7fd2e8d4da9cb430e32f646adcf92441b72f82b14"],"repoTags":["docker.io/library/nginx:alpine"],"size":"54252718"},{"id":"52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969","repoDigests":["registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998","registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c"],"repoTags":["registry.k8s.io/coredns/coredns:v1.12.1"],"size":"76103547"},{"id":"7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813","repoDigests":["registry.k8s.io/kube-scheduler@sha256:47306e2178d9766fe3fe9eada02fa995f9f29dcbf518832293dfbe16964e2d31","registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46
fb3500"],"repoTags":["registry.k8s.io/kube-scheduler:v1.34.1"],"size":"53844823"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f","repoDigests":["registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c","registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41"],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"742092"},{"id":"fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7","repoDigests":["registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a","registry.k8s.io/kube-proxy@sha256:9e876d245c76f0e3529c82bb103b60a59c4e190317827f977ab696cc4f43020a"],"repoTags":["registry.k8s.io/kube-proxy:v1.34.1"],"size":"73138073
"},{"id":"409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c","repoDigests":["docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a","docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11"],"repoTags":["docker.io/kindest/kindnetd:v20250512-df8de77b"],"size":"109379124"},{"id":"5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115","repoDigests":["registry.k8s.io/etcd@sha256:71170330936954286be203a7737459f2838dd71cc79f8ffaac91548a9e079b8f","registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19"],"repoTags":["registry.k8s.io/etcd:3.6.4-0"],"size":"195976448"},{"id":"c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97","repoDigests":["registry.k8s.io/kube-apiserver@sha256:264da1e0ab552e24b2eb034a1b75745df78fe8903bade1fa0f874f9167dad964","registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902"],"repoTa
gs":["registry.k8s.io/kube-apiserver:v1.34.1"],"size":"89046001"},{"id":"07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029"],"repoTags":[],"size":"249229937"},{"id":"9d0e6f6199dcb6e045dad103064601d730fcfaf8d1bd357d969fb70bd5b90dec","repoDigests":["docker.io/library/nginx@sha256:12549785f32b3daca6f1c39e7d756226eeb0e8bb20b9e2d8a03d484160862b58","docker.io/library/nginx@sha256:f547e3d0d5d02f7009737b284abc87d808e4252b42dceea361811e9fc606287f"],"repoTags":["docker.io/library/nginx:latest"],"size":"155489797"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6b
dfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89","registry.k8s.io/kube-controller-manager@sha256:a6fe41965f1693c8a73ebe75e215d0b7c0902732c66c6692b0dbcfb0f077c992"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.34.1"],"size":"76004181"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d8
3aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a","docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"43824855"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-290156 image ls --format json --alsologtostderr:
I1101 08:38:38.883099   49337 out.go:360] Setting OutFile to fd 1 ...
I1101 08:38:38.883445   49337 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1101 08:38:38.883458   49337 out.go:374] Setting ErrFile to fd 2...
I1101 08:38:38.883465   49337 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1101 08:38:38.883798   49337 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21835-5913/.minikube/bin
I1101 08:38:38.884726   49337 config.go:182] Loaded profile config "functional-290156": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1101 08:38:38.884894   49337 config.go:182] Loaded profile config "functional-290156": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1101 08:38:38.885453   49337 cli_runner.go:164] Run: docker container inspect functional-290156 --format={{.State.Status}}
I1101 08:38:38.912635   49337 ssh_runner.go:195] Run: systemctl --version
I1101 08:38:38.912715   49337 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-290156
I1101 08:38:38.937342   49337 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21835-5913/.minikube/machines/functional-290156/id_rsa Username:docker}
I1101 08:38:39.051368   49337 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.30s)
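
The stdout above is one JSON array of image objects (id, repoDigests, repoTags, size). A sketch for filtering it on the host; jq is not used by the test and is only an illustration:
	# List just the repo tags known to the cluster's runtime
	out/minikube-linux-amd64 -p functional-290156 image ls --format json | jq -r '.[].repoTags[]'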

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageBuild (3.81s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-amd64 -p functional-290156 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-290156 ssh pgrep buildkitd: exit status 1 (326.98093ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-amd64 -p functional-290156 image build -t localhost/my-image:functional-290156 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-amd64 -p functional-290156 image build -t localhost/my-image:functional-290156 testdata/build --alsologtostderr: (3.246507103s)
functional_test.go:335: (dbg) Stdout: out/minikube-linux-amd64 -p functional-290156 image build -t localhost/my-image:functional-290156 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> d05725cdf74
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-290156
--> 87b9d46a830
Successfully tagged localhost/my-image:functional-290156
87b9d46a8300a27a1e56f174371d12a0b335c800faada918ff5b60474683cb5c
functional_test.go:338: (dbg) Stderr: out/minikube-linux-amd64 -p functional-290156 image build -t localhost/my-image:functional-290156 testdata/build --alsologtostderr:
I1101 08:38:42.084722   49747 out.go:360] Setting OutFile to fd 1 ...
I1101 08:38:42.084910   49747 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1101 08:38:42.084922   49747 out.go:374] Setting ErrFile to fd 2...
I1101 08:38:42.084928   49747 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1101 08:38:42.085229   49747 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21835-5913/.minikube/bin
I1101 08:38:42.086065   49747 config.go:182] Loaded profile config "functional-290156": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1101 08:38:42.086786   49747 config.go:182] Loaded profile config "functional-290156": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1101 08:38:42.087404   49747 cli_runner.go:164] Run: docker container inspect functional-290156 --format={{.State.Status}}
I1101 08:38:42.110323   49747 ssh_runner.go:195] Run: systemctl --version
I1101 08:38:42.110381   49747 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-290156
I1101 08:38:42.133719   49747 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21835-5913/.minikube/machines/functional-290156/id_rsa Username:docker}
I1101 08:38:42.245945   49747 build_images.go:162] Building image from path: /tmp/build.37168830.tar
I1101 08:38:42.246032   49747 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1101 08:38:42.257054   49747 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.37168830.tar
I1101 08:38:42.262514   49747 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.37168830.tar: stat -c "%s %y" /var/lib/minikube/build/build.37168830.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.37168830.tar': No such file or directory
I1101 08:38:42.262561   49747 ssh_runner.go:362] scp /tmp/build.37168830.tar --> /var/lib/minikube/build/build.37168830.tar (3072 bytes)
I1101 08:38:42.286626   49747 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.37168830
I1101 08:38:42.297950   49747 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.37168830 -xf /var/lib/minikube/build/build.37168830.tar
I1101 08:38:42.309918   49747 crio.go:315] Building image: /var/lib/minikube/build/build.37168830
I1101 08:38:42.310018   49747 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-290156 /var/lib/minikube/build/build.37168830 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I1101 08:38:45.229883   49747 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-290156 /var/lib/minikube/build/build.37168830 --cgroup-manager=cgroupfs: (2.919820186s)
I1101 08:38:45.229957   49747 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.37168830
I1101 08:38:45.238297   49747 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.37168830.tar
I1101 08:38:45.246395   49747 build_images.go:218] Built localhost/my-image:functional-290156 from /tmp/build.37168830.tar
I1101 08:38:45.246432   49747 build_images.go:134] succeeded building to: functional-290156
I1101 08:38:45.246436   49747 build_images.go:135] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-290156 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.81s)
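
The STEP lines above imply a three-instruction build context. A sketch that reproduces the same build by hand; the Dockerfile and content.txt shown here are reconstructions from the logged steps, not the actual testdata/build contents:
	# Recreate an equivalent build context
	mkdir -p /tmp/build-ctx && cd /tmp/build-ctx
	printf 'FROM gcr.io/k8s-minikube/busybox\nRUN true\nADD content.txt /\n' > Dockerfile
	echo hello > content.txt
	# Build inside the cluster's runtime, as the test does
	out/minikube-linux-amd64 -p functional-290156 image build -t localhost/my-image:functional-290156 . --alsologtostderr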

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (0.99s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-290156
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.99s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.45s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-290156 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-290156 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-290156 tunnel --alsologtostderr] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-290156 tunnel --alsologtostderr] ...
helpers_test.go:525: unable to kill pid 42620: os: process already finished
helpers_test.go:519: unable to terminate pid 42328: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.45s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-amd64 -p functional-290156 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (8.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-290156 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:352: "nginx-svc" [e93e78a8-c8cc-459c-ab19-ca411f500a09] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx-svc" [e93e78a8-c8cc-459c-ab19-ca411f500a09] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 8.004282457s
I1101 08:37:50.646825    9414 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (8.23s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageRemove (0.51s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-amd64 -p functional-290156 image rm kicbase/echo-server:functional-290156 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-290156 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.51s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.06s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-290156 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.06s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (26.47s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.105.176.244 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (26.47s)
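
A sketch of the direct-access check performed here: with minikube tunnel running, the nginx-svc LoadBalancer receives an ingress IP that is reachable from the host. The curl call is illustrative; the test issues the HTTP request from Go:
	# Shell 1: keep the tunnel running
	out/minikube-linux-amd64 -p functional-290156 tunnel --alsologtostderr
	# Shell 2: read the assigned ingress IP and request it
	IP=$(kubectl --context functional-290156 get svc nginx-svc -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
	curl -s "http://$IP" | head -n 5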

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-amd64 -p functional-290156 tunnel --alsologtostderr] ...
functional_test_tunnel_test.go:437: failed to stop process: signal: terminated
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_not_create (0.44s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.44s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_list (0.43s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-amd64 profile list
I1101 08:38:18.786685    9414 detect.go:223] nested VM detected
functional_test.go:1330: Took "362.273493ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1344: Took "66.818434ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.43s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (0.42s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1381: Took "351.854185ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1394: Took "63.428159ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.42s)

                                                
                                    
TestFunctional/parallel/MountCmd/any-port (5.97s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-290156 /tmp/TestFunctionalparallelMountCmdany-port2748790446/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1761986299561379757" to /tmp/TestFunctionalparallelMountCmdany-port2748790446/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1761986299561379757" to /tmp/TestFunctionalparallelMountCmdany-port2748790446/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1761986299561379757" to /tmp/TestFunctionalparallelMountCmdany-port2748790446/001/test-1761986299561379757
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-290156 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-290156 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (303.179542ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1101 08:38:19.864983    9414 retry.go:31] will retry after 468.798799ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-290156 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-290156 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Nov  1 08:38 created-by-test
-rw-r--r-- 1 docker docker 24 Nov  1 08:38 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Nov  1 08:38 test-1761986299561379757
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-290156 ssh cat /mount-9p/test-1761986299561379757
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-290156 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:352: "busybox-mount" [48751ce0-0d3a-441c-a6d0-ac331c1a1632] Pending
helpers_test.go:352: "busybox-mount" [48751ce0-0d3a-441c-a6d0-ac331c1a1632] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:352: "busybox-mount" [48751ce0-0d3a-441c-a6d0-ac331c1a1632] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "busybox-mount" [48751ce0-0d3a-441c-a6d0-ac331c1a1632] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 3.003736715s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-290156 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-290156 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-290156 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-290156 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-290156 /tmp/TestFunctionalparallelMountCmdany-port2748790446/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (5.97s)
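
A sketch of the 9p mount check exercised above, using the same commands as the test; the host directory is illustrative:
	# Shell 1: mount a host directory into the guest over 9p
	out/minikube-linux-amd64 mount -p functional-290156 /tmp/demo-mount:/mount-9p --alsologtostderr -v=1
	# Shell 2: confirm the mount is visible inside the guest
	out/minikube-linux-amd64 -p functional-290156 ssh "findmnt -T /mount-9p | grep 9p"
	out/minikube-linux-amd64 -p functional-290156 ssh -- ls -la /mount-9p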

                                                
                                    
TestFunctional/parallel/MountCmd/specific-port (2.09s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-290156 /tmp/TestFunctionalparallelMountCmdspecific-port2525153448/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-290156 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-290156 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (295.754251ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1101 08:38:25.826576    9414 retry.go:31] will retry after 723.177545ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-290156 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-290156 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-290156 /tmp/TestFunctionalparallelMountCmdspecific-port2525153448/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-290156 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-290156 ssh "sudo umount -f /mount-9p": exit status 1 (281.0972ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-290156 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-290156 /tmp/TestFunctionalparallelMountCmdspecific-port2525153448/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.09s)

                                                
                                    
TestFunctional/parallel/MountCmd/VerifyCleanup (1.73s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-290156 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2812905119/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-290156 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2812905119/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-290156 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2812905119/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-290156 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-290156 ssh "findmnt -T" /mount1: exit status 1 (363.182414ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1101 08:38:27.980944    9414 retry.go:31] will retry after 438.344493ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-290156 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-290156 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-290156 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-290156 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-290156 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2812905119/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-290156 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2812905119/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-290156 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2812905119/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.73s)

                                                
                                    
TestFunctional/parallel/Version/short (0.06s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-amd64 -p functional-290156 version --short
--- PASS: TestFunctional/parallel/Version/short (0.06s)

                                                
                                    
TestFunctional/parallel/Version/components (0.49s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-amd64 -p functional-290156 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.49s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.15s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-290156 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.15s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.15s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-290156 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.15s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.15s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-290156 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.15s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/List (1.7s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-amd64 -p functional-290156 service list
functional_test.go:1469: (dbg) Done: out/minikube-linux-amd64 -p functional-290156 service list: (1.704271094s)
--- PASS: TestFunctional/parallel/ServiceCmd/List (1.70s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/JSONOutput (1.71s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-amd64 -p functional-290156 service list -o json
functional_test.go:1499: (dbg) Done: out/minikube-linux-amd64 -p functional-290156 service list -o json: (1.706465077s)
functional_test.go:1504: Took "1.70654005s" to run "out/minikube-linux-amd64 -p functional-290156 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (1.71s)

                                                
                                    
TestFunctional/delete_echo-server_images (0.04s)

                                                
                                                
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-290156
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

                                                
                                    
TestFunctional/delete_my-image_image (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-290156
--- PASS: TestFunctional/delete_my-image_image (0.02s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-290156
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                    
TestMultiControlPlane/serial/StartCluster (116.56s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 -p ha-582205 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 -p ha-582205 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio: (1m55.794317319s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-582205 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/StartCluster (116.56s)
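
A quick sketch for verifying the HA topology created by the --ha start above; the status command is the one the test runs, the kubectl call is illustrative:
	out/minikube-linux-amd64 -p ha-582205 status --alsologtostderr -v 5
	# Expect three control-plane nodes (ha-582205, -m02, -m03)
	kubectl --context ha-582205 get nodes -o wide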

                                                
                                    
TestMultiControlPlane/serial/DeployApp (3.95s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 -p ha-582205 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 -p ha-582205 kubectl -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 -p ha-582205 kubectl -- rollout status deployment/busybox: (1.980298746s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p ha-582205 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 -p ha-582205 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-582205 kubectl -- exec busybox-7b57f96db7-c6js9 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-582205 kubectl -- exec busybox-7b57f96db7-vnmnr -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-582205 kubectl -- exec busybox-7b57f96db7-xzdtb -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-582205 kubectl -- exec busybox-7b57f96db7-c6js9 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-582205 kubectl -- exec busybox-7b57f96db7-vnmnr -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-582205 kubectl -- exec busybox-7b57f96db7-xzdtb -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-582205 kubectl -- exec busybox-7b57f96db7-c6js9 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-582205 kubectl -- exec busybox-7b57f96db7-vnmnr -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-582205 kubectl -- exec busybox-7b57f96db7-xzdtb -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (3.95s)

                                                
                                    
TestMultiControlPlane/serial/PingHostFromPods (1.07s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 -p ha-582205 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-582205 kubectl -- exec busybox-7b57f96db7-c6js9 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-582205 kubectl -- exec busybox-7b57f96db7-c6js9 -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-582205 kubectl -- exec busybox-7b57f96db7-vnmnr -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-582205 kubectl -- exec busybox-7b57f96db7-vnmnr -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-582205 kubectl -- exec busybox-7b57f96db7-xzdtb -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-582205 kubectl -- exec busybox-7b57f96db7-xzdtb -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.07s)

                                                
                                    
TestMultiControlPlane/serial/AddWorkerNode (25.53s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 -p ha-582205 node add --alsologtostderr -v 5
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 -p ha-582205 node add --alsologtostderr -v 5: (24.60424339s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-582205 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (25.53s)

                                                
                                    
TestMultiControlPlane/serial/NodeLabels (0.07s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-582205 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.07s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterClusterStart (0.93s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.93s)

                                                
                                    
TestMultiControlPlane/serial/CopyFile (17.85s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-amd64 -p ha-582205 status --output json --alsologtostderr -v 5
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-582205 cp testdata/cp-test.txt ha-582205:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-582205 ssh -n ha-582205 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-582205 cp ha-582205:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2900375956/001/cp-test_ha-582205.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-582205 ssh -n ha-582205 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-582205 cp ha-582205:/home/docker/cp-test.txt ha-582205-m02:/home/docker/cp-test_ha-582205_ha-582205-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-582205 ssh -n ha-582205 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-582205 ssh -n ha-582205-m02 "sudo cat /home/docker/cp-test_ha-582205_ha-582205-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-582205 cp ha-582205:/home/docker/cp-test.txt ha-582205-m03:/home/docker/cp-test_ha-582205_ha-582205-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-582205 ssh -n ha-582205 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-582205 ssh -n ha-582205-m03 "sudo cat /home/docker/cp-test_ha-582205_ha-582205-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-582205 cp ha-582205:/home/docker/cp-test.txt ha-582205-m04:/home/docker/cp-test_ha-582205_ha-582205-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-582205 ssh -n ha-582205 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-582205 ssh -n ha-582205-m04 "sudo cat /home/docker/cp-test_ha-582205_ha-582205-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-582205 cp testdata/cp-test.txt ha-582205-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-582205 ssh -n ha-582205-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-582205 cp ha-582205-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2900375956/001/cp-test_ha-582205-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-582205 ssh -n ha-582205-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-582205 cp ha-582205-m02:/home/docker/cp-test.txt ha-582205:/home/docker/cp-test_ha-582205-m02_ha-582205.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-582205 ssh -n ha-582205-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-582205 ssh -n ha-582205 "sudo cat /home/docker/cp-test_ha-582205-m02_ha-582205.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-582205 cp ha-582205-m02:/home/docker/cp-test.txt ha-582205-m03:/home/docker/cp-test_ha-582205-m02_ha-582205-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-582205 ssh -n ha-582205-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-582205 ssh -n ha-582205-m03 "sudo cat /home/docker/cp-test_ha-582205-m02_ha-582205-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-582205 cp ha-582205-m02:/home/docker/cp-test.txt ha-582205-m04:/home/docker/cp-test_ha-582205-m02_ha-582205-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-582205 ssh -n ha-582205-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-582205 ssh -n ha-582205-m04 "sudo cat /home/docker/cp-test_ha-582205-m02_ha-582205-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-582205 cp testdata/cp-test.txt ha-582205-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-582205 ssh -n ha-582205-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-582205 cp ha-582205-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2900375956/001/cp-test_ha-582205-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-582205 ssh -n ha-582205-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-582205 cp ha-582205-m03:/home/docker/cp-test.txt ha-582205:/home/docker/cp-test_ha-582205-m03_ha-582205.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-582205 ssh -n ha-582205-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-582205 ssh -n ha-582205 "sudo cat /home/docker/cp-test_ha-582205-m03_ha-582205.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-582205 cp ha-582205-m03:/home/docker/cp-test.txt ha-582205-m02:/home/docker/cp-test_ha-582205-m03_ha-582205-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-582205 ssh -n ha-582205-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-582205 ssh -n ha-582205-m02 "sudo cat /home/docker/cp-test_ha-582205-m03_ha-582205-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-582205 cp ha-582205-m03:/home/docker/cp-test.txt ha-582205-m04:/home/docker/cp-test_ha-582205-m03_ha-582205-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-582205 ssh -n ha-582205-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-582205 ssh -n ha-582205-m04 "sudo cat /home/docker/cp-test_ha-582205-m03_ha-582205-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-582205 cp testdata/cp-test.txt ha-582205-m04:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-582205 ssh -n ha-582205-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-582205 cp ha-582205-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2900375956/001/cp-test_ha-582205-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-582205 ssh -n ha-582205-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-582205 cp ha-582205-m04:/home/docker/cp-test.txt ha-582205:/home/docker/cp-test_ha-582205-m04_ha-582205.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-582205 ssh -n ha-582205-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-582205 ssh -n ha-582205 "sudo cat /home/docker/cp-test_ha-582205-m04_ha-582205.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-582205 cp ha-582205-m04:/home/docker/cp-test.txt ha-582205-m02:/home/docker/cp-test_ha-582205-m04_ha-582205-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-582205 ssh -n ha-582205-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-582205 ssh -n ha-582205-m02 "sudo cat /home/docker/cp-test_ha-582205-m04_ha-582205-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-582205 cp ha-582205-m04:/home/docker/cp-test.txt ha-582205-m03:/home/docker/cp-test_ha-582205-m04_ha-582205-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-582205 ssh -n ha-582205-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-582205 ssh -n ha-582205-m03 "sudo cat /home/docker/cp-test_ha-582205-m04_ha-582205-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (17.85s)
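Every hop above follows the same two-step pattern: minikube cp onto a node, then minikube ssh to cat the file back. A minimal Go sketch of one round trip, reusing the binary path, profile and node names from this run (the local file is the test's own fixture):

    package main

    import (
        "fmt"
        "os/exec"
    )

    // Copy a local file onto a node with "minikube cp", then read it back
    // over "minikube ssh" to confirm the contents arrived.
    func copyAndVerify(minikube, profile, node, localFile, remotePath string) error {
        cp := exec.Command(minikube, "-p", profile, "cp", localFile, node+":"+remotePath)
        if out, err := cp.CombinedOutput(); err != nil {
            return fmt.Errorf("cp failed: %v\n%s", err, out)
        }
        cat := exec.Command(minikube, "-p", profile, "ssh", "-n", node, "sudo cat "+remotePath)
        out, err := cat.CombinedOutput()
        if err != nil {
            return fmt.Errorf("ssh cat failed: %v\n%s", err, out)
        }
        fmt.Printf("%s:%s contains:\n%s", node, remotePath, out)
        return nil
    }

    func main() {
        err := copyAndVerify("out/minikube-linux-amd64", "ha-582205", "ha-582205-m02",
            "testdata/cp-test.txt", "/home/docker/cp-test.txt")
        if err != nil {
            fmt.Println(err)
        }
    }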

                                                
                                    
TestMultiControlPlane/serial/StopSecondaryNode (19.88s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p ha-582205 node stop m02 --alsologtostderr -v 5
ha_test.go:365: (dbg) Done: out/minikube-linux-amd64 -p ha-582205 node stop m02 --alsologtostderr -v 5: (19.14416078s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-amd64 -p ha-582205 status --alsologtostderr -v 5
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-582205 status --alsologtostderr -v 5: exit status 7 (737.852308ms)

                                                
                                                
-- stdout --
	ha-582205
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-582205-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-582205-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-582205-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1101 08:50:58.128504   74420 out.go:360] Setting OutFile to fd 1 ...
	I1101 08:50:58.128785   74420 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 08:50:58.128795   74420 out.go:374] Setting ErrFile to fd 2...
	I1101 08:50:58.128799   74420 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 08:50:58.129048   74420 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21835-5913/.minikube/bin
	I1101 08:50:58.129226   74420 out.go:368] Setting JSON to false
	I1101 08:50:58.129263   74420 mustload.go:66] Loading cluster: ha-582205
	I1101 08:50:58.129310   74420 notify.go:221] Checking for updates...
	I1101 08:50:58.129611   74420 config.go:182] Loaded profile config "ha-582205": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 08:50:58.129624   74420 status.go:174] checking status of ha-582205 ...
	I1101 08:50:58.130065   74420 cli_runner.go:164] Run: docker container inspect ha-582205 --format={{.State.Status}}
	I1101 08:50:58.151178   74420 status.go:371] ha-582205 host status = "Running" (err=<nil>)
	I1101 08:50:58.151210   74420 host.go:66] Checking if "ha-582205" exists ...
	I1101 08:50:58.151536   74420 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-582205
	I1101 08:50:58.171773   74420 host.go:66] Checking if "ha-582205" exists ...
	I1101 08:50:58.172091   74420 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1101 08:50:58.172140   74420 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-582205
	I1101 08:50:58.191298   74420 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21835-5913/.minikube/machines/ha-582205/id_rsa Username:docker}
	I1101 08:50:58.290578   74420 ssh_runner.go:195] Run: systemctl --version
	I1101 08:50:58.297262   74420 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 08:50:58.310541   74420 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 08:50:58.374295   74420 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:false NGoroutines:75 SystemTime:2025-11-01 08:50:58.362844386 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1101 08:50:58.374817   74420 kubeconfig.go:125] found "ha-582205" server: "https://192.168.49.254:8443"
	I1101 08:50:58.374844   74420 api_server.go:166] Checking apiserver status ...
	I1101 08:50:58.374903   74420 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 08:50:58.387373   74420 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1256/cgroup
	W1101 08:50:58.396485   74420 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1256/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1101 08:50:58.396543   74420 ssh_runner.go:195] Run: ls
	I1101 08:50:58.401169   74420 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1101 08:50:58.406950   74420 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1101 08:50:58.406978   74420 status.go:463] ha-582205 apiserver status = Running (err=<nil>)
	I1101 08:50:58.406989   74420 status.go:176] ha-582205 status: &{Name:ha-582205 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1101 08:50:58.407004   74420 status.go:174] checking status of ha-582205-m02 ...
	I1101 08:50:58.407251   74420 cli_runner.go:164] Run: docker container inspect ha-582205-m02 --format={{.State.Status}}
	I1101 08:50:58.426397   74420 status.go:371] ha-582205-m02 host status = "Stopped" (err=<nil>)
	I1101 08:50:58.426419   74420 status.go:384] host is not running, skipping remaining checks
	I1101 08:50:58.426425   74420 status.go:176] ha-582205-m02 status: &{Name:ha-582205-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1101 08:50:58.426445   74420 status.go:174] checking status of ha-582205-m03 ...
	I1101 08:50:58.426682   74420 cli_runner.go:164] Run: docker container inspect ha-582205-m03 --format={{.State.Status}}
	I1101 08:50:58.446308   74420 status.go:371] ha-582205-m03 host status = "Running" (err=<nil>)
	I1101 08:50:58.446334   74420 host.go:66] Checking if "ha-582205-m03" exists ...
	I1101 08:50:58.446613   74420 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-582205-m03
	I1101 08:50:58.465664   74420 host.go:66] Checking if "ha-582205-m03" exists ...
	I1101 08:50:58.465956   74420 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1101 08:50:58.465995   74420 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-582205-m03
	I1101 08:50:58.485184   74420 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21835-5913/.minikube/machines/ha-582205-m03/id_rsa Username:docker}
	I1101 08:50:58.584708   74420 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 08:50:58.598270   74420 kubeconfig.go:125] found "ha-582205" server: "https://192.168.49.254:8443"
	I1101 08:50:58.598298   74420 api_server.go:166] Checking apiserver status ...
	I1101 08:50:58.598334   74420 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 08:50:58.609905   74420 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1168/cgroup
	W1101 08:50:58.619120   74420 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1168/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1101 08:50:58.619180   74420 ssh_runner.go:195] Run: ls
	I1101 08:50:58.623534   74420 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1101 08:50:58.627771   74420 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1101 08:50:58.627796   74420 status.go:463] ha-582205-m03 apiserver status = Running (err=<nil>)
	I1101 08:50:58.627804   74420 status.go:176] ha-582205-m03 status: &{Name:ha-582205-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1101 08:50:58.627817   74420 status.go:174] checking status of ha-582205-m04 ...
	I1101 08:50:58.628082   74420 cli_runner.go:164] Run: docker container inspect ha-582205-m04 --format={{.State.Status}}
	I1101 08:50:58.647062   74420 status.go:371] ha-582205-m04 host status = "Running" (err=<nil>)
	I1101 08:50:58.647086   74420 host.go:66] Checking if "ha-582205-m04" exists ...
	I1101 08:50:58.647329   74420 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-582205-m04
	I1101 08:50:58.665695   74420 host.go:66] Checking if "ha-582205-m04" exists ...
	I1101 08:50:58.665993   74420 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1101 08:50:58.666035   74420 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-582205-m04
	I1101 08:50:58.684759   74420 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32798 SSHKeyPath:/home/jenkins/minikube-integration/21835-5913/.minikube/machines/ha-582205-m04/id_rsa Username:docker}
	I1101 08:50:58.786284   74420 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 08:50:58.799761   74420 status.go:176] ha-582205-m04 status: &{Name:ha-582205-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (19.88s)
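Note that minikube status deliberately exits non-zero while any node is down (exit status 7 in the run above), so a caller has to inspect the exit code rather than treat it as a hard failure. A small Go sketch, assuming the same binary and profile:

    package main

    import (
        "errors"
        "fmt"
        "os/exec"
    )

    // Run "status" and report its exit code; a non-zero code is expected
    // here because ha-582205-m02 has just been stopped.
    func main() {
        cmd := exec.Command("out/minikube-linux-amd64", "-p", "ha-582205", "status")
        out, err := cmd.CombinedOutput()
        var exitErr *exec.ExitError
        if errors.As(err, &exitErr) {
            fmt.Printf("status exited %d (expected while a node is stopped)\n", exitErr.ExitCode())
        } else if err != nil {
            fmt.Println("could not run status:", err)
            return
        }
        fmt.Printf("%s", out)
    }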

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.74s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.74s)

                                                
                                    
TestMultiControlPlane/serial/RestartSecondaryNode (14.5s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p ha-582205 node start m02 --alsologtostderr -v 5
ha_test.go:422: (dbg) Done: out/minikube-linux-amd64 -p ha-582205 node start m02 --alsologtostderr -v 5: (13.523856695s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-582205 status --alsologtostderr -v 5
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (14.50s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.93s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.93s)

                                                
                                    
TestMultiControlPlane/serial/RestartClusterKeepsNodes (196.81s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-amd64 -p ha-582205 node list --alsologtostderr -v 5
ha_test.go:464: (dbg) Run:  out/minikube-linux-amd64 -p ha-582205 stop --alsologtostderr -v 5
E1101 08:51:43.472109    9414 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21835-5913/.minikube/profiles/addons-491859/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:464: (dbg) Done: out/minikube-linux-amd64 -p ha-582205 stop --alsologtostderr -v 5: (50.024548251s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-amd64 -p ha-582205 start --wait true --alsologtostderr -v 5
E1101 08:52:41.360203    9414 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21835-5913/.minikube/profiles/functional-290156/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 08:52:41.366611    9414 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21835-5913/.minikube/profiles/functional-290156/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 08:52:41.378029    9414 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21835-5913/.minikube/profiles/functional-290156/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 08:52:41.399451    9414 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21835-5913/.minikube/profiles/functional-290156/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 08:52:41.440850    9414 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21835-5913/.minikube/profiles/functional-290156/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 08:52:41.522356    9414 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21835-5913/.minikube/profiles/functional-290156/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 08:52:41.683988    9414 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21835-5913/.minikube/profiles/functional-290156/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 08:52:42.005732    9414 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21835-5913/.minikube/profiles/functional-290156/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 08:52:42.647794    9414 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21835-5913/.minikube/profiles/functional-290156/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 08:52:43.930114    9414 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21835-5913/.minikube/profiles/functional-290156/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 08:52:46.492009    9414 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21835-5913/.minikube/profiles/functional-290156/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 08:52:51.613364    9414 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21835-5913/.minikube/profiles/functional-290156/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 08:53:01.855001    9414 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21835-5913/.minikube/profiles/functional-290156/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 08:53:06.541468    9414 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21835-5913/.minikube/profiles/addons-491859/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 08:53:22.336625    9414 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21835-5913/.minikube/profiles/functional-290156/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 08:54:03.298701    9414 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21835-5913/.minikube/profiles/functional-290156/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:469: (dbg) Done: out/minikube-linux-amd64 -p ha-582205 start --wait true --alsologtostderr -v 5: (2m26.653221702s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-amd64 -p ha-582205 node list --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (196.81s)
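The assertion behind this test is simply that node list reports the same nodes before the stop and after start --wait true. A rough sketch of that comparison, with error handling trimmed on the stop/start calls:

    package main

    import (
        "bytes"
        "fmt"
        "os/exec"
    )

    // Capture "node list" output, restart the whole cluster, capture it
    // again, and compare the two listings byte for byte.
    func nodeList(minikube, profile string) ([]byte, error) {
        return exec.Command(minikube, "-p", profile, "node", "list").CombinedOutput()
    }

    func main() {
        const mk, profile = "out/minikube-linux-amd64", "ha-582205"
        before, err := nodeList(mk, profile)
        if err != nil {
            fmt.Println("node list failed:", err)
            return
        }
        exec.Command(mk, "-p", profile, "stop").Run()                    // full cluster stop
        exec.Command(mk, "-p", profile, "start", "--wait", "true").Run() // restart and wait
        after, _ := nodeList(mk, profile)
        fmt.Println("nodes unchanged across restart:", bytes.Equal(before, after))
    }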

                                                
                                    
TestMultiControlPlane/serial/DeleteSecondaryNode (10.67s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-amd64 -p ha-582205 node delete m03 --alsologtostderr -v 5
ha_test.go:489: (dbg) Done: out/minikube-linux-amd64 -p ha-582205 node delete m03 --alsologtostderr -v 5: (9.830547115s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-amd64 -p ha-582205 status --alsologtostderr -v 5
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (10.67s)
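The go-template above prints one Ready-condition status per node, so with m03 gone every emitted value should be True. A sketch of the same check, assuming kubectl's current context points at this cluster (the literal single quotes from the logged command are dropped, since no shell is involved here):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // Print each node's Ready condition using the template from the log,
    // then report whether all of them are True.
    func main() {
        tmpl := `{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}}{{.status}}{{"\n"}}{{end}}{{end}}{{end}}`
        out, err := exec.Command("kubectl", "get", "nodes", "-o", "go-template="+tmpl).CombinedOutput()
        if err != nil {
            fmt.Printf("kubectl failed: %v\n%s\n", err, out)
            return
        }
        allReady := true
        for _, status := range strings.Fields(string(out)) {
            fmt.Println("node Ready =", status)
            if status != "True" {
                allReady = false
            }
        }
        fmt.Println("all nodes Ready:", allReady)
    }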

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.72s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.72s)

                                                
                                    
TestMultiControlPlane/serial/StopCluster (32.75s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p ha-582205 stop --alsologtostderr -v 5
ha_test.go:533: (dbg) Done: out/minikube-linux-amd64 -p ha-582205 stop --alsologtostderr -v 5: (32.625961815s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-amd64 -p ha-582205 status --alsologtostderr -v 5
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-582205 status --alsologtostderr -v 5: exit status 7 (125.71638ms)

                                                
                                                
-- stdout --
	ha-582205
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-582205-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-582205-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1101 08:55:15.863215   88694 out.go:360] Setting OutFile to fd 1 ...
	I1101 08:55:15.863499   88694 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 08:55:15.863507   88694 out.go:374] Setting ErrFile to fd 2...
	I1101 08:55:15.863512   88694 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 08:55:15.863694   88694 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21835-5913/.minikube/bin
	I1101 08:55:15.863888   88694 out.go:368] Setting JSON to false
	I1101 08:55:15.863918   88694 mustload.go:66] Loading cluster: ha-582205
	I1101 08:55:15.864034   88694 notify.go:221] Checking for updates...
	I1101 08:55:15.864514   88694 config.go:182] Loaded profile config "ha-582205": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 08:55:15.864529   88694 status.go:174] checking status of ha-582205 ...
	I1101 08:55:15.864973   88694 cli_runner.go:164] Run: docker container inspect ha-582205 --format={{.State.Status}}
	I1101 08:55:15.886970   88694 status.go:371] ha-582205 host status = "Stopped" (err=<nil>)
	I1101 08:55:15.886998   88694 status.go:384] host is not running, skipping remaining checks
	I1101 08:55:15.887005   88694 status.go:176] ha-582205 status: &{Name:ha-582205 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1101 08:55:15.887027   88694 status.go:174] checking status of ha-582205-m02 ...
	I1101 08:55:15.887309   88694 cli_runner.go:164] Run: docker container inspect ha-582205-m02 --format={{.State.Status}}
	I1101 08:55:15.905650   88694 status.go:371] ha-582205-m02 host status = "Stopped" (err=<nil>)
	I1101 08:55:15.905675   88694 status.go:384] host is not running, skipping remaining checks
	I1101 08:55:15.905681   88694 status.go:176] ha-582205-m02 status: &{Name:ha-582205-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1101 08:55:15.905701   88694 status.go:174] checking status of ha-582205-m04 ...
	I1101 08:55:15.905980   88694 cli_runner.go:164] Run: docker container inspect ha-582205-m04 --format={{.State.Status}}
	I1101 08:55:15.924606   88694 status.go:371] ha-582205-m04 host status = "Stopped" (err=<nil>)
	I1101 08:55:15.924643   88694 status.go:384] host is not running, skipping remaining checks
	I1101 08:55:15.924653   88694 status.go:176] ha-582205-m04 status: &{Name:ha-582205-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (32.75s)

                                                
                                    
TestMultiControlPlane/serial/RestartCluster (58.35s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-amd64 -p ha-582205 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio
E1101 08:55:25.220060    9414 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21835-5913/.minikube/profiles/functional-290156/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:562: (dbg) Done: out/minikube-linux-amd64 -p ha-582205 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio: (57.521999607s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-amd64 -p ha-582205 status --alsologtostderr -v 5
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (58.35s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.73s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.73s)

                                                
                                    
TestMultiControlPlane/serial/AddSecondaryNode (35.64s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-amd64 -p ha-582205 node add --control-plane --alsologtostderr -v 5
E1101 08:56:43.470637    9414 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21835-5913/.minikube/profiles/addons-491859/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:607: (dbg) Done: out/minikube-linux-amd64 -p ha-582205 node add --control-plane --alsologtostderr -v 5: (34.719968706s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-amd64 -p ha-582205 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (35.64s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.93s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.93s)

                                                
                                    
TestJSONOutput/start/Command (41.46s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-730531 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=crio
E1101 08:57:41.358787    9414 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21835-5913/.minikube/profiles/functional-290156/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-730531 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=crio: (41.460424678s)
--- PASS: TestJSONOutput/start/Command (41.46s)
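With --output=json the start command emits one CloudEvent per stdout line, and step events carry data.currentstep and data.totalsteps (the shape is visible in the TestErrorJSONOutput output further down). A sketch that streams those events, which is essentially the input the DistinctCurrentSteps and IncreasingCurrentSteps subtests below examine:

    package main

    import (
        "bufio"
        "encoding/json"
        "fmt"
        "os/exec"
    )

    // Minimal view of a minikube CloudEvent: only the fields used below.
    type event struct {
        Type string            `json:"type"`
        Data map[string]string `json:"data"`
    }

    func main() {
        cmd := exec.Command("out/minikube-linux-amd64", "start", "-p", "json-output-730531",
            "--output=json", "--user=testUser", "--memory=3072", "--wait=true",
            "--driver=docker", "--container-runtime=crio")
        stdout, err := cmd.StdoutPipe()
        if err != nil {
            fmt.Println("pipe failed:", err)
            return
        }
        if err := cmd.Start(); err != nil {
            fmt.Println("start failed:", err)
            return
        }
        scanner := bufio.NewScanner(stdout)
        for scanner.Scan() {
            var ev event
            if json.Unmarshal(scanner.Bytes(), &ev) != nil {
                continue // skip anything that is not a JSON event
            }
            if ev.Type == "io.k8s.sigs.minikube.step" {
                fmt.Printf("step %s/%s: %s\n", ev.Data["currentstep"], ev.Data["totalsteps"], ev.Data["message"])
            }
        }
        cmd.Wait()
    }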

                                                
                                    
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (6.19s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-730531 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-730531 --output=json --user=testUser: (6.192342241s)
--- PASS: TestJSONOutput/stop/Command (6.19s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.23s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-558473 --memory=3072 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-558473 --memory=3072 --output=json --wait=true --driver=fail: exit status 56 (75.707389ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"5894d3b8-2be7-4d87-b767-d3ac9c2eb9d3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-558473] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"33d264fe-a460-465e-b640-5a0dbd1ca59a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21835"}}
	{"specversion":"1.0","id":"3aa52c2a-d4a1-4312-bc7c-386c6e32ed37","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"e5b58bf7-9f58-47ae-8267-0e4bc8004af9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21835-5913/kubeconfig"}}
	{"specversion":"1.0","id":"4d4844fc-a815-4953-8cc2-1992776dc8da","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21835-5913/.minikube"}}
	{"specversion":"1.0","id":"14d26594-1c33-4b60-a591-9123b9975b1f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"f35ccc39-f760-4694-9c23-2582612316ae","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"cabf0ad3-fa41-4d3a-98e9-d2880e25e29e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-558473" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-558473
--- PASS: TestErrorJSONOutput (0.23s)

                                                
                                    
TestKicCustomNetwork/create_custom_network (28.96s)

                                                
                                                
=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-395861 --network=
E1101 08:58:09.066030    9414 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21835-5913/.minikube/profiles/functional-290156/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-395861 --network=: (26.766199664s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-395861" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-395861
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-395861: (2.169886872s)
--- PASS: TestKicCustomNetwork/create_custom_network (28.96s)

                                                
                                    
TestKicCustomNetwork/use_default_bridge_network (23.56s)

                                                
                                                
=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-522776 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-522776 --network=bridge: (21.542087837s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-522776" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-522776
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-522776: (1.999009113s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (23.56s)

                                                
                                    
TestKicExistingNetwork (27.09s)

                                                
                                                
=== RUN   TestKicExistingNetwork
I1101 08:58:48.656993    9414 cli_runner.go:164] Run: docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W1101 08:58:48.674814    9414 cli_runner.go:211] docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I1101 08:58:48.674904    9414 network_create.go:284] running [docker network inspect existing-network] to gather additional debugging logs...
I1101 08:58:48.674922    9414 cli_runner.go:164] Run: docker network inspect existing-network
W1101 08:58:48.692500    9414 cli_runner.go:211] docker network inspect existing-network returned with exit code 1
I1101 08:58:48.692532    9414 network_create.go:287] error running [docker network inspect existing-network]: docker network inspect existing-network: exit status 1
stdout:
[]

                                                
                                                
stderr:
Error response from daemon: network existing-network not found
I1101 08:58:48.692547    9414 network_create.go:289] output of [docker network inspect existing-network]: -- stdout --
[]

                                                
                                                
-- /stdout --
** stderr ** 
Error response from daemon: network existing-network not found

                                                
                                                
** /stderr **
I1101 08:58:48.692688    9414 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1101 08:58:48.711098    9414 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-5f44df6b5a5b IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:aa:38:92:20:b3:ae} reservation:<nil>}
I1101 08:58:48.711512    9414 network.go:206] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001eabe10}
I1101 08:58:48.711545    9414 network_create.go:124] attempt to create docker network existing-network 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
I1101 08:58:48.711603    9414 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
I1101 08:58:48.771614    9414 network_create.go:108] docker network existing-network 192.168.58.0/24 created
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-amd64 start -p existing-network-636761 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-amd64 start -p existing-network-636761 --network=existing-network: (24.899042243s)
helpers_test.go:175: Cleaning up "existing-network-636761" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p existing-network-636761
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p existing-network-636761: (2.040243907s)
I1101 08:59:15.729245    9414 cli_runner.go:164] Run: docker network ls --filter=label=existing-network --format {{.Name}}
--- PASS: TestKicExistingNetwork (27.09s)
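The setup above reduces to creating a labelled bridge network by hand and then pointing start at it. A sketch that mirrors the exact docker network create invocation from this run (subnet, gateway, MTU and labels as chosen above):

    package main

    import (
        "fmt"
        "os/exec"
    )

    // Create the pre-existing network the same way minikube did in the log,
    // then start a profile attached to it with --network=existing-network.
    func main() {
        create := exec.Command("docker", "network", "create",
            "--driver=bridge", "--subnet=192.168.58.0/24", "--gateway=192.168.58.1",
            "-o", "--ip-masq", "-o", "--icc", "-o", "com.docker.network.driver.mtu=1500",
            "--label=created_by.minikube.sigs.k8s.io=true",
            "--label=name.minikube.sigs.k8s.io=existing-network",
            "existing-network")
        if out, err := create.CombinedOutput(); err != nil {
            fmt.Printf("network create failed: %v\n%s\n", err, out)
            return
        }
        start := exec.Command("out/minikube-linux-amd64", "start",
            "-p", "existing-network-636761", "--network=existing-network")
        if out, err := start.CombinedOutput(); err != nil {
            fmt.Printf("minikube start failed: %v\n%s\n", err, out)
        }
    }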

                                                
                                    
TestKicCustomSubnet (27.82s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-subnet-737461 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-subnet-737461 --subnet=192.168.60.0/24: (25.579396401s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-737461 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-737461" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p custom-subnet-737461
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p custom-subnet-737461: (2.221882569s)
--- PASS: TestKicCustomSubnet (27.82s)
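The verification at the end of this test is a one-line docker network inspect. A sketch of the same check, comparing the reported subnet against the value passed to --subnet:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // Ask Docker which subnet the profile's network actually got and compare
    // it with the subnet requested at start time.
    func main() {
        out, err := exec.Command("docker", "network", "inspect", "custom-subnet-737461",
            "--format", "{{(index .IPAM.Config 0).Subnet}}").Output()
        if err != nil {
            fmt.Println("inspect failed:", err)
            return
        }
        got := strings.TrimSpace(string(out))
        fmt.Println("subnet matches:", got == "192.168.60.0/24")
    }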

                                                
                                    
TestKicStaticIP (26.77s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-amd64 start -p static-ip-707768 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-amd64 start -p static-ip-707768 --static-ip=192.168.200.200: (24.404028594s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-amd64 -p static-ip-707768 ip
helpers_test.go:175: Cleaning up "static-ip-707768" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p static-ip-707768
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p static-ip-707768: (2.211757515s)
--- PASS: TestKicStaticIP (26.77s)
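The static-IP round trip above is: start the profile with --static-ip, then confirm that minikube ip reports the same address. A small sketch using the values from this run:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // Start a profile pinned to a static IP and read the address back.
    func main() {
        const mk, profile, ip = "out/minikube-linux-amd64", "static-ip-707768", "192.168.200.200"
        if out, err := exec.Command(mk, "start", "-p", profile, "--static-ip="+ip).CombinedOutput(); err != nil {
            fmt.Printf("start failed: %v\n%s\n", err, out)
            return
        }
        out, err := exec.Command(mk, "-p", profile, "ip").Output()
        if err != nil {
            fmt.Println("ip failed:", err)
            return
        }
        fmt.Println("reported IP matches:", strings.TrimSpace(string(out)) == ip)
    }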

                                                
                                    
TestMainNoArgs (0.06s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:70: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.06s)

                                                
                                    
TestMinikubeProfile (49.15s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-891814 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-891814 --driver=docker  --container-runtime=crio: (20.885092422s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-894661 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-894661 --driver=docker  --container-runtime=crio: (22.085329714s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-891814
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-894661
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-894661" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-894661
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p second-894661: (2.445299362s)
helpers_test.go:175: Cleaning up "first-891814" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-891814
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p first-891814: (2.417811286s)
--- PASS: TestMinikubeProfile (49.15s)

                                                
                                    
TestMountStart/serial/StartWithMountFirst (5.64s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-050262 --memory=3072 --mount-string /tmp/TestMountStartserial3735157398/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-050262 --memory=3072 --mount-string /tmp/TestMountStartserial3735157398/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (4.641066161s)
--- PASS: TestMountStart/serial/StartWithMountFirst (5.64s)

                                                
                                    
TestMountStart/serial/VerifyMountFirst (0.28s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-050262 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.28s)
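
The two steps above start a machine-only profile with a 9p host mount and then verify it over ssh. Roughly, with the mount parameters used in this run:

    # no Kubernetes, just a node with a host directory mounted at /minikube-host
    minikube start -p mount-start-1-050262 --memory=3072 \
      --mount-string /tmp/TestMountStartserial3735157398/001:/minikube-host \
      --mount-gid 0 --mount-uid 0 --mount-msize 6543 --mount-port 46464 \
      --no-kubernetes --driver=docker --container-runtime=crio

    # confirm the mount is visible inside the node
    minikube -p mount-start-1-050262 ssh -- ls /minikube-host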

                                                
                                    
TestMountStart/serial/StartWithMountSecond (6.34s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-067074 --memory=3072 --mount-string /tmp/TestMountStartserial3735157398/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-067074 --memory=3072 --mount-string /tmp/TestMountStartserial3735157398/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (5.342436357s)
--- PASS: TestMountStart/serial/StartWithMountSecond (6.34s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (0.28s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-067074 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.28s)

                                                
                                    
TestMountStart/serial/DeleteFirst (1.76s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-050262 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p mount-start-1-050262 --alsologtostderr -v=5: (1.764361209s)
--- PASS: TestMountStart/serial/DeleteFirst (1.76s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (0.28s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-067074 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.28s)

                                                
                                    
TestMountStart/serial/Stop (1.26s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:196: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-067074
mount_start_test.go:196: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-067074: (1.264719121s)
--- PASS: TestMountStart/serial/Stop (1.26s)

                                                
                                    
TestMountStart/serial/RestartStopped (7.45s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:207: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-067074
mount_start_test.go:207: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-067074: (6.44536872s)
--- PASS: TestMountStart/serial/RestartStopped (7.45s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (0.28s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-067074 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.28s)
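
The Stop, RestartStopped and VerifyMountPostStop steps check that the mount on the second profile survives a stop/start cycle; condensed:

    # stop the node, bring it back, and make sure the 9p mount is still there
    minikube stop -p mount-start-2-067074
    minikube start -p mount-start-2-067074
    minikube -p mount-start-2-067074 ssh -- ls /minikube-host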

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (69.87s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-548731 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=crio
E1101 09:01:43.471641    9414 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21835-5913/.minikube/profiles/addons-491859/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-548731 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=crio: (1m9.365588703s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-548731 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (69.87s)
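
A condensed sketch of the fresh two-node bring-up exercised above (one control plane plus one worker), followed by the per-node status check:

    # create a two-node cluster and wait for all components to be ready
    minikube start -p multinode-548731 --wait=true --memory=3072 --nodes=2 \
      --driver=docker --container-runtime=crio

    # host/kubelet/apiserver state for every node in the profile
    minikube -p multinode-548731 status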

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (3.63s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-548731 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-548731 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-548731 -- rollout status deployment/busybox: (2.105638975s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-548731 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-548731 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-548731 -- exec busybox-7b57f96db7-k878f -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-548731 -- exec busybox-7b57f96db7-zqd88 -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-548731 -- exec busybox-7b57f96db7-k878f -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-548731 -- exec busybox-7b57f96db7-zqd88 -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-548731 -- exec busybox-7b57f96db7-k878f -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-548731 -- exec busybox-7b57f96db7-zqd88 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (3.63s)
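
The deployment and DNS checks above, distilled; the manifest path and pod name come from this run, and minikube kubectl runs a kubectl matching the cluster version:

    # deploy the busybox test workload and wait for the rollout to finish
    minikube kubectl -p multinode-548731 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
    minikube kubectl -p multinode-548731 -- rollout status deployment/busybox

    # list pod IPs and names, then resolve cluster DNS from one of the replicas
    minikube kubectl -p multinode-548731 -- get pods -o jsonpath='{.items[*].status.podIP}'
    minikube kubectl -p multinode-548731 -- get pods -o jsonpath='{.items[*].metadata.name}'
    minikube kubectl -p multinode-548731 -- exec busybox-7b57f96db7-k878f -- nslookup kubernetes.default.svc.cluster.local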

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (0.71s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-548731 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-548731 -- exec busybox-7b57f96db7-k878f -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-548731 -- exec busybox-7b57f96db7-k878f -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-548731 -- exec busybox-7b57f96db7-zqd88 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-548731 -- exec busybox-7b57f96db7-zqd88 -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.71s)

                                                
                                    
TestMultiNode/serial/AddNode (23.96s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-548731 -v=5 --alsologtostderr
E1101 09:02:41.354677    9414 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21835-5913/.minikube/profiles/functional-290156/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-548731 -v=5 --alsologtostderr: (23.304518519s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-548731 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (23.96s)
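
Adding a worker after the fact is a single command; the new node is named by appending the next index (m03 in this run):

    # grow the existing cluster by one worker node and re-check status
    minikube node add -p multinode-548731
    minikube -p multinode-548731 status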

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-548731 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.68s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.68s)

                                                
                                    
TestMultiNode/serial/CopyFile (9.93s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-548731 status --output json --alsologtostderr
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-548731 cp testdata/cp-test.txt multinode-548731:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-548731 ssh -n multinode-548731 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-548731 cp multinode-548731:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2891961087/001/cp-test_multinode-548731.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-548731 ssh -n multinode-548731 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-548731 cp multinode-548731:/home/docker/cp-test.txt multinode-548731-m02:/home/docker/cp-test_multinode-548731_multinode-548731-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-548731 ssh -n multinode-548731 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-548731 ssh -n multinode-548731-m02 "sudo cat /home/docker/cp-test_multinode-548731_multinode-548731-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-548731 cp multinode-548731:/home/docker/cp-test.txt multinode-548731-m03:/home/docker/cp-test_multinode-548731_multinode-548731-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-548731 ssh -n multinode-548731 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-548731 ssh -n multinode-548731-m03 "sudo cat /home/docker/cp-test_multinode-548731_multinode-548731-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-548731 cp testdata/cp-test.txt multinode-548731-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-548731 ssh -n multinode-548731-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-548731 cp multinode-548731-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2891961087/001/cp-test_multinode-548731-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-548731 ssh -n multinode-548731-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-548731 cp multinode-548731-m02:/home/docker/cp-test.txt multinode-548731:/home/docker/cp-test_multinode-548731-m02_multinode-548731.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-548731 ssh -n multinode-548731-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-548731 ssh -n multinode-548731 "sudo cat /home/docker/cp-test_multinode-548731-m02_multinode-548731.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-548731 cp multinode-548731-m02:/home/docker/cp-test.txt multinode-548731-m03:/home/docker/cp-test_multinode-548731-m02_multinode-548731-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-548731 ssh -n multinode-548731-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-548731 ssh -n multinode-548731-m03 "sudo cat /home/docker/cp-test_multinode-548731-m02_multinode-548731-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-548731 cp testdata/cp-test.txt multinode-548731-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-548731 ssh -n multinode-548731-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-548731 cp multinode-548731-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2891961087/001/cp-test_multinode-548731-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-548731 ssh -n multinode-548731-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-548731 cp multinode-548731-m03:/home/docker/cp-test.txt multinode-548731:/home/docker/cp-test_multinode-548731-m03_multinode-548731.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-548731 ssh -n multinode-548731-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-548731 ssh -n multinode-548731 "sudo cat /home/docker/cp-test_multinode-548731-m03_multinode-548731.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-548731 cp multinode-548731-m03:/home/docker/cp-test.txt multinode-548731-m02:/home/docker/cp-test_multinode-548731-m03_multinode-548731-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-548731 ssh -n multinode-548731-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-548731 ssh -n multinode-548731-m02 "sudo cat /home/docker/cp-test_multinode-548731-m03_multinode-548731-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (9.93s)
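
The CopyFile block exercises every direction of minikube cp: host to node, node to host, and node to node. A minimal sketch covering one of each (the local destination path is shortened here for readability):

    # host -> node
    minikube -p multinode-548731 cp testdata/cp-test.txt multinode-548731:/home/docker/cp-test.txt
    # node -> host
    minikube -p multinode-548731 cp multinode-548731:/home/docker/cp-test.txt /tmp/cp-test_multinode-548731.txt
    # node -> node
    minikube -p multinode-548731 cp multinode-548731:/home/docker/cp-test.txt multinode-548731-m02:/home/docker/cp-test_multinode-548731_multinode-548731-m02.txt
    # verify the copy on the target node
    minikube -p multinode-548731 ssh -n multinode-548731-m02 "sudo cat /home/docker/cp-test_multinode-548731_multinode-548731-m02.txt"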

                                                
                                    
TestMultiNode/serial/StopNode (2.29s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-548731 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-548731 node stop m03: (1.275091448s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-548731 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-548731 status: exit status 7 (506.777694ms)

                                                
                                                
-- stdout --
	multinode-548731
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-548731-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-548731-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-548731 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-548731 status --alsologtostderr: exit status 7 (506.570078ms)

                                                
                                                
-- stdout --
	multinode-548731
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-548731-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-548731-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1101 09:03:15.752107  148272 out.go:360] Setting OutFile to fd 1 ...
	I1101 09:03:15.752377  148272 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:03:15.752387  148272 out.go:374] Setting ErrFile to fd 2...
	I1101 09:03:15.752391  148272 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:03:15.752568  148272 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21835-5913/.minikube/bin
	I1101 09:03:15.752756  148272 out.go:368] Setting JSON to false
	I1101 09:03:15.752853  148272 mustload.go:66] Loading cluster: multinode-548731
	I1101 09:03:15.752921  148272 notify.go:221] Checking for updates...
	I1101 09:03:15.753302  148272 config.go:182] Loaded profile config "multinode-548731": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:03:15.753319  148272 status.go:174] checking status of multinode-548731 ...
	I1101 09:03:15.753790  148272 cli_runner.go:164] Run: docker container inspect multinode-548731 --format={{.State.Status}}
	I1101 09:03:15.774906  148272 status.go:371] multinode-548731 host status = "Running" (err=<nil>)
	I1101 09:03:15.774936  148272 host.go:66] Checking if "multinode-548731" exists ...
	I1101 09:03:15.775284  148272 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-548731
	I1101 09:03:15.793964  148272 host.go:66] Checking if "multinode-548731" exists ...
	I1101 09:03:15.794211  148272 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1101 09:03:15.794244  148272 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-548731
	I1101 09:03:15.811749  148272 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32903 SSHKeyPath:/home/jenkins/minikube-integration/21835-5913/.minikube/machines/multinode-548731/id_rsa Username:docker}
	I1101 09:03:15.909366  148272 ssh_runner.go:195] Run: systemctl --version
	I1101 09:03:15.915657  148272 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 09:03:15.927670  148272 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 09:03:15.986242  148272 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:51 OomKillDisable:false NGoroutines:65 SystemTime:2025-11-01 09:03:15.975580869 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1101 09:03:15.986783  148272 kubeconfig.go:125] found "multinode-548731" server: "https://192.168.67.2:8443"
	I1101 09:03:15.986814  148272 api_server.go:166] Checking apiserver status ...
	I1101 09:03:15.986846  148272 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 09:03:15.998520  148272 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1251/cgroup
	W1101 09:03:16.007274  148272 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1251/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1101 09:03:16.007318  148272 ssh_runner.go:195] Run: ls
	I1101 09:03:16.011026  148272 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1101 09:03:16.016180  148272 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I1101 09:03:16.016205  148272 status.go:463] multinode-548731 apiserver status = Running (err=<nil>)
	I1101 09:03:16.016217  148272 status.go:176] multinode-548731 status: &{Name:multinode-548731 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1101 09:03:16.016236  148272 status.go:174] checking status of multinode-548731-m02 ...
	I1101 09:03:16.016479  148272 cli_runner.go:164] Run: docker container inspect multinode-548731-m02 --format={{.State.Status}}
	I1101 09:03:16.034449  148272 status.go:371] multinode-548731-m02 host status = "Running" (err=<nil>)
	I1101 09:03:16.034473  148272 host.go:66] Checking if "multinode-548731-m02" exists ...
	I1101 09:03:16.034743  148272 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-548731-m02
	I1101 09:03:16.052316  148272 host.go:66] Checking if "multinode-548731-m02" exists ...
	I1101 09:03:16.052609  148272 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1101 09:03:16.052658  148272 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-548731-m02
	I1101 09:03:16.070877  148272 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32908 SSHKeyPath:/home/jenkins/minikube-integration/21835-5913/.minikube/machines/multinode-548731-m02/id_rsa Username:docker}
	I1101 09:03:16.168454  148272 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 09:03:16.181056  148272 status.go:176] multinode-548731-m02 status: &{Name:multinode-548731-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1101 09:03:16.181091  148272 status.go:174] checking status of multinode-548731-m03 ...
	I1101 09:03:16.181350  148272 cli_runner.go:164] Run: docker container inspect multinode-548731-m03 --format={{.State.Status}}
	I1101 09:03:16.198788  148272 status.go:371] multinode-548731-m03 host status = "Stopped" (err=<nil>)
	I1101 09:03:16.198809  148272 status.go:384] host is not running, skipping remaining checks
	I1101 09:03:16.198819  148272 status.go:176] multinode-548731-m03 status: &{Name:multinode-548731-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.29s)
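
Stopping a single node leaves the rest of the cluster running, and status then returns exit code 7 because at least one node is down, which is why the non-zero exits above are treated as a pass:

    # stop only the m03 worker, then query status (exit code 7 signals a stopped node)
    minikube -p multinode-548731 node stop m03
    minikube -p multinode-548731 status || echo "status exited with $?"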

                                                
                                    
TestMultiNode/serial/StartAfterStop (7.67s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-548731 node start m03 -v=5 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-548731 node start m03 -v=5 --alsologtostderr: (6.950010754s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-548731 status -v=5 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (7.67s)

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (75.71s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-548731
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-548731
multinode_test.go:321: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-548731: (31.410861958s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-548731 --wait=true -v=5 --alsologtostderr
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-548731 --wait=true -v=5 --alsologtostderr: (44.170847401s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-548731
--- PASS: TestMultiNode/serial/RestartKeepsNodes (75.71s)
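
A full stop and restart of the profile preserves the node list; condensed:

    # record the nodes, stop everything, restart with --wait, and confirm the same nodes return
    minikube node list -p multinode-548731
    minikube stop -p multinode-548731
    minikube start -p multinode-548731 --wait=true
    minikube node list -p multinode-548731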

                                                
                                    
TestMultiNode/serial/DeleteNode (5.32s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-548731 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-548731 node delete m03: (4.686709328s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-548731 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.32s)

                                                
                                    
TestMultiNode/serial/StopMultiNode (30.46s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-548731 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-amd64 -p multinode-548731 stop: (30.262944729s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-548731 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-548731 status: exit status 7 (99.040948ms)

                                                
                                                
-- stdout --
	multinode-548731
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-548731-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-548731 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-548731 status --alsologtostderr: exit status 7 (100.007074ms)

                                                
                                                
-- stdout --
	multinode-548731
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-548731-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1101 09:05:15.315399  157936 out.go:360] Setting OutFile to fd 1 ...
	I1101 09:05:15.315671  157936 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:05:15.315681  157936 out.go:374] Setting ErrFile to fd 2...
	I1101 09:05:15.315685  157936 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:05:15.315966  157936 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21835-5913/.minikube/bin
	I1101 09:05:15.316209  157936 out.go:368] Setting JSON to false
	I1101 09:05:15.316250  157936 mustload.go:66] Loading cluster: multinode-548731
	I1101 09:05:15.316342  157936 notify.go:221] Checking for updates...
	I1101 09:05:15.316751  157936 config.go:182] Loaded profile config "multinode-548731": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:05:15.316777  157936 status.go:174] checking status of multinode-548731 ...
	I1101 09:05:15.317383  157936 cli_runner.go:164] Run: docker container inspect multinode-548731 --format={{.State.Status}}
	I1101 09:05:15.336619  157936 status.go:371] multinode-548731 host status = "Stopped" (err=<nil>)
	I1101 09:05:15.336645  157936 status.go:384] host is not running, skipping remaining checks
	I1101 09:05:15.336652  157936 status.go:176] multinode-548731 status: &{Name:multinode-548731 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1101 09:05:15.336710  157936 status.go:174] checking status of multinode-548731-m02 ...
	I1101 09:05:15.337047  157936 cli_runner.go:164] Run: docker container inspect multinode-548731-m02 --format={{.State.Status}}
	I1101 09:05:15.356544  157936 status.go:371] multinode-548731-m02 host status = "Stopped" (err=<nil>)
	I1101 09:05:15.356565  157936 status.go:384] host is not running, skipping remaining checks
	I1101 09:05:15.356571  157936 status.go:176] multinode-548731-m02 status: &{Name:multinode-548731-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (30.46s)

                                                
                                    
TestMultiNode/serial/RestartMultiNode (46.84s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-548731 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=crio
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-548731 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=crio: (46.217893549s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-548731 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (46.84s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (23.46s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-548731
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-548731-m02 --driver=docker  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-548731-m02 --driver=docker  --container-runtime=crio: exit status 14 (75.300914ms)

                                                
                                                
-- stdout --
	* [multinode-548731-m02] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21835
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21835-5913/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21835-5913/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-548731-m02' is duplicated with machine name 'multinode-548731-m02' in profile 'multinode-548731'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-548731-m03 --driver=docker  --container-runtime=crio
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-548731-m03 --driver=docker  --container-runtime=crio: (20.529595658s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-548731
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-548731: exit status 80 (299.005957ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-548731 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-548731-m03 already exists in multinode-548731-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-548731-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-amd64 delete -p multinode-548731-m03: (2.497433172s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (23.46s)

                                                
                                    
TestScheduledStopUnix (96.46s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-109876 --memory=3072 --driver=docker  --container-runtime=crio
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-109876 --memory=3072 --driver=docker  --container-runtime=crio: (20.193769442s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-109876 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-109876 -n scheduled-stop-109876
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-109876 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
I1101 09:13:56.448387    9414 retry.go:31] will retry after 109.445µs: open /home/jenkins/minikube-integration/21835-5913/.minikube/profiles/scheduled-stop-109876/pid: no such file or directory
I1101 09:13:56.449571    9414 retry.go:31] will retry after 98.529µs: open /home/jenkins/minikube-integration/21835-5913/.minikube/profiles/scheduled-stop-109876/pid: no such file or directory
I1101 09:13:56.450765    9414 retry.go:31] will retry after 136.851µs: open /home/jenkins/minikube-integration/21835-5913/.minikube/profiles/scheduled-stop-109876/pid: no such file or directory
I1101 09:13:56.451916    9414 retry.go:31] will retry after 191.936µs: open /home/jenkins/minikube-integration/21835-5913/.minikube/profiles/scheduled-stop-109876/pid: no such file or directory
I1101 09:13:56.453047    9414 retry.go:31] will retry after 535.478µs: open /home/jenkins/minikube-integration/21835-5913/.minikube/profiles/scheduled-stop-109876/pid: no such file or directory
I1101 09:13:56.454187    9414 retry.go:31] will retry after 1.011596ms: open /home/jenkins/minikube-integration/21835-5913/.minikube/profiles/scheduled-stop-109876/pid: no such file or directory
I1101 09:13:56.455354    9414 retry.go:31] will retry after 1.154328ms: open /home/jenkins/minikube-integration/21835-5913/.minikube/profiles/scheduled-stop-109876/pid: no such file or directory
I1101 09:13:56.457612    9414 retry.go:31] will retry after 1.811982ms: open /home/jenkins/minikube-integration/21835-5913/.minikube/profiles/scheduled-stop-109876/pid: no such file or directory
I1101 09:13:56.459806    9414 retry.go:31] will retry after 2.215912ms: open /home/jenkins/minikube-integration/21835-5913/.minikube/profiles/scheduled-stop-109876/pid: no such file or directory
I1101 09:13:56.463097    9414 retry.go:31] will retry after 3.770367ms: open /home/jenkins/minikube-integration/21835-5913/.minikube/profiles/scheduled-stop-109876/pid: no such file or directory
I1101 09:13:56.467348    9414 retry.go:31] will retry after 3.559708ms: open /home/jenkins/minikube-integration/21835-5913/.minikube/profiles/scheduled-stop-109876/pid: no such file or directory
I1101 09:13:56.471610    9414 retry.go:31] will retry after 11.166409ms: open /home/jenkins/minikube-integration/21835-5913/.minikube/profiles/scheduled-stop-109876/pid: no such file or directory
I1101 09:13:56.483927    9414 retry.go:31] will retry after 15.826179ms: open /home/jenkins/minikube-integration/21835-5913/.minikube/profiles/scheduled-stop-109876/pid: no such file or directory
I1101 09:13:56.500177    9414 retry.go:31] will retry after 23.388508ms: open /home/jenkins/minikube-integration/21835-5913/.minikube/profiles/scheduled-stop-109876/pid: no such file or directory
I1101 09:13:56.524526    9414 retry.go:31] will retry after 27.945213ms: open /home/jenkins/minikube-integration/21835-5913/.minikube/profiles/scheduled-stop-109876/pid: no such file or directory
I1101 09:13:56.552824    9414 retry.go:31] will retry after 36.200657ms: open /home/jenkins/minikube-integration/21835-5913/.minikube/profiles/scheduled-stop-109876/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-109876 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-109876 -n scheduled-stop-109876
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-109876
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-109876 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-109876
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-109876: exit status 7 (81.187905ms)

                                                
                                                
-- stdout --
	scheduled-stop-109876
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-109876 -n scheduled-stop-109876
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-109876 -n scheduled-stop-109876: exit status 7 (81.388281ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-109876" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-109876
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p scheduled-stop-109876: (4.717595837s)
--- PASS: TestScheduledStopUnix (96.46s)
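
The scheduled-stop flow above boils down to: schedule a stop, cancel it, schedule a short one and let it fire, then observe the stopped state. A sketch (the sleep is only an illustration of waiting past the 15s deadline, not part of the test):

    # schedule a stop five minutes out, then cancel it
    minikube stop -p scheduled-stop-109876 --schedule 5m
    minikube stop -p scheduled-stop-109876 --cancel-scheduled

    # schedule a 15s stop, wait for it to fire, then check the host state (status exits 7 when stopped)
    minikube stop -p scheduled-stop-109876 --schedule 15s
    sleep 30
    minikube status -p scheduled-stop-109876 --format='{{.Host}}'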

                                                
                                    
TestInsufficientStorage (9.66s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-amd64 start -p insufficient-storage-756482 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p insufficient-storage-756482 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio: exit status 26 (7.091879146s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"c48436b2-f763-419e-8ed7-b8508b166c4f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-756482] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"eea54a56-5999-400a-b340-1d0ccb3c7bd0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21835"}}
	{"specversion":"1.0","id":"4241900e-2033-4a8e-bcaf-da3297cd6821","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"09829414-335a-4bf3-a0d7-23aa5a8bf6fb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21835-5913/kubeconfig"}}
	{"specversion":"1.0","id":"0a711a58-7701-4b49-a4dc-02016a1730be","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21835-5913/.minikube"}}
	{"specversion":"1.0","id":"221d7cbd-c22d-4b71-a860-901f2ae9cf3f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"ccbf1941-c89b-475a-bd55-792ddbaf0544","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"70b93a83-ae89-4780-8d08-4155c5c01798","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"8c6e8bc0-fc7f-412e-bba4-6a89be39a2f8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"4f83a649-0ab1-4129-adff-9fb1fd11ac4e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"d8914200-055e-400c-93d1-55be3aa81876","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"862b77f2-0cdb-408b-9e09-21068cddd8bb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-756482\" primary control-plane node in \"insufficient-storage-756482\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"b68a11a8-18a9-471e-b26d-07cb3e1becda","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.48-1760939008-21773 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"ade480fa-ed38-45bd-900b-60aa2571d0ff","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=3072MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"75578a01-6194-4e9d-825c-3a5d01f3e2a1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

                                                
                                                
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-756482 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-756482 --output=json --layout=cluster: exit status 7 (301.842732ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-756482","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=3072MB) ...","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-756482","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E1101 09:15:19.635318  179551 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-756482" does not appear in /home/jenkins/minikube-integration/21835-5913/kubeconfig

                                                
                                                
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-756482 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-756482 --output=json --layout=cluster: exit status 7 (301.23513ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-756482","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-756482","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E1101 09:15:19.936937  179663 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-756482" does not appear in /home/jenkins/minikube-integration/21835-5913/kubeconfig
	E1101 09:15:19.948204  179663 status.go:258] unable to read event log: stat: stat /home/jenkins/minikube-integration/21835-5913/.minikube/profiles/insufficient-storage-756482/events.json: no such file or directory

                                                
                                                
** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-756482" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p insufficient-storage-756482
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p insufficient-storage-756482: (1.967222317s)
--- PASS: TestInsufficientStorage (9.66s)
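
With --output=json, start emits one CloudEvents-style JSON object per line on stdout, so the failure reason can be extracted mechanically; the jq filter below is an illustrative assumption, not something the test runs:

    # machine-readable start; the final event carries the RSRC_DOCKER_STORAGE error message
    minikube start -p insufficient-storage-756482 --memory=3072 --output=json --wait=true \
      --driver=docker --container-runtime=crio \
      | jq -r 'select(.type == "io.k8s.sigs.minikube.error") | .data.message'

    # cluster-level JSON status, reporting StatusCode 507 (InsufficientStorage)
    minikube status -p insufficient-storage-756482 --output=json --layout=cluster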

                                                
                                    
TestRunningBinaryUpgrade (48.33s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.32.0.906304413 start -p running-upgrade-274843 --memory=3072 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.32.0.906304413 start -p running-upgrade-274843 --memory=3072 --vm-driver=docker  --container-runtime=crio: (23.414753418s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-274843 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-274843 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (21.882853282s)
helpers_test.go:175: Cleaning up "running-upgrade-274843" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-274843
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-274843: (2.488726488s)
--- PASS: TestRunningBinaryUpgrade (48.33s)

                                                
                                    
TestKubernetesUpgrade (310.84s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-846924 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-846924 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (33.704893549s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-846924
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-846924: (1.373948979s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-846924 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-846924 status --format={{.Host}}: exit status 7 (106.193226ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-846924 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-846924 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (4m25.354070472s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-846924 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-846924 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-846924 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio: exit status 106 (102.342409ms)

-- stdout --
	* [kubernetes-upgrade-846924] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21835
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21835-5913/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21835-5913/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.34.1 cluster to v1.28.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.28.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-846924
	    minikube start -p kubernetes-upgrade-846924 --kubernetes-version=v1.28.0
	    
	    2) Create a second cluster with Kubernetes 1.28.0, by running:
	    
	    minikube start -p kubernetes-upgrade-8469242 --kubernetes-version=v1.28.0
	    
	    3) Use the existing cluster at version Kubernetes 1.34.1, by running:
	    
	    minikube start -p kubernetes-upgrade-846924 --kubernetes-version=v1.34.1
	    

** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-846924 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-846924 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (7.576341111s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-846924" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-846924
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-846924: (2.544377233s)
--- PASS: TestKubernetesUpgrade (310.84s)
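Note on the downgrade step above: asking an existing v1.34.1 cluster to start at v1.28.0 does not touch the cluster; minikube refuses with K8S_DOWNGRADE_UNSUPPORTED and exit status 106, and the suite only checks that status. A minimal Go sketch of the same check, assuming a minikube binary on PATH and a hypothetical profile name (not the one used in this run):

-- example (Go sketch) --
	package main

	import (
		"errors"
		"fmt"
		"log"
		"os/exec"
	)

	func main() {
		// Hypothetical profile; this run used kubernetes-upgrade-846924.
		profile := "downgrade-check"

		// Ask an existing newer cluster to start at an older Kubernetes version.
		cmd := exec.Command("minikube", "start", "-p", profile,
			"--kubernetes-version=v1.28.0", "--driver=docker", "--container-runtime=crio")
		err := cmd.Run()

		var exitErr *exec.ExitError
		if errors.As(err, &exitErr) && exitErr.ExitCode() == 106 {
			// 106 is the exit status shown above for K8S_DOWNGRADE_UNSUPPORTED.
			fmt.Println("downgrade correctly refused")
			return
		}
		log.Fatalf("expected exit status 106, got: %v", err)
	}
-- /example --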

TestMissingContainerUpgrade (88.07s)

=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.32.0.2294126316 start -p missing-upgrade-505730 --memory=3072 --driver=docker  --container-runtime=crio
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.32.0.2294126316 start -p missing-upgrade-505730 --memory=3072 --driver=docker  --container-runtime=crio: (42.525519151s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-505730
version_upgrade_test.go:318: (dbg) Done: docker stop missing-upgrade-505730: (1.962462947s)
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-505730
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-amd64 start -p missing-upgrade-505730 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-amd64 start -p missing-upgrade-505730 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (40.580763776s)
helpers_test.go:175: Cleaning up "missing-upgrade-505730" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p missing-upgrade-505730
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p missing-upgrade-505730: (2.557657327s)
--- PASS: TestMissingContainerUpgrade (88.07s)

TestPause/serial/Start (78.54s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-349394 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-349394 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio: (1m18.540053808s)
--- PASS: TestPause/serial/Start (78.54s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.1s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:85: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-413481 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:85: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-413481 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio: exit status 14 (99.786407ms)

-- stdout --
	* [NoKubernetes-413481] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21835
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21835-5913/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21835-5913/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.10s)

TestNoKubernetes/serial/StartWithK8s (35.1s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:97: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-413481 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:97: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-413481 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (34.718214124s)
no_kubernetes_test.go:202: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-413481 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (35.10s)

TestNoKubernetes/serial/StartWithStopK8s (16.84s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:114: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-413481 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:114: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-413481 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (14.448291579s)
no_kubernetes_test.go:202: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-413481 status -o json
no_kubernetes_test.go:202: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-413481 status -o json: exit status 2 (337.488413ms)

-- stdout --
	{"Name":"NoKubernetes-413481","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

-- /stdout --
no_kubernetes_test.go:126: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-413481
no_kubernetes_test.go:126: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-413481: (2.051397196s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (16.84s)
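The StartWithStopK8s check above leans on `minikube status -o json`: with Kubernetes disabled the node container stays Running while Kubelet and APIServer report Stopped, and the command exits non-zero (status 2 here) while still printing the JSON. A small sketch of decoding that output, assuming the single-node shape shown above and a placeholder profile name:

-- example (Go sketch) --
	package main

	import (
		"encoding/json"
		"fmt"
		"log"
		"os/exec"
	)

	// nodeStatus mirrors the fields printed by `minikube status -o json` above.
	type nodeStatus struct {
		Name       string
		Host       string
		Kubelet    string
		APIServer  string
		Kubeconfig string
		Worker     bool
	}

	func main() {
		// Placeholder profile; this run used NoKubernetes-413481.
		out, err := exec.Command("minikube", "-p", "no-k8s-demo", "status", "-o", "json").Output()
		// status exits non-zero when components are stopped but still prints JSON,
		// so keep whatever stdout was captured even when err != nil.
		if len(out) == 0 {
			log.Fatalf("no status output: %v", err)
		}

		var st nodeStatus
		if err := json.Unmarshal(out, &st); err != nil {
			log.Fatal(err)
		}
		fmt.Printf("host=%s kubelet=%s apiserver=%s\n", st.Host, st.Kubelet, st.APIServer)
	}
-- /example --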

TestNoKubernetes/serial/Start (4.83s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:138: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-413481 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:138: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-413481 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (4.834103355s)
--- PASS: TestNoKubernetes/serial/Start (4.83s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.3s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-413481 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-413481 "sudo systemctl is-active --quiet service kubelet": exit status 1 (298.164389ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.30s)
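VerifyK8sNotRunning passes precisely because the ssh command fails: `systemctl is-active` exits 0 only for an active unit, and the status 3 surfaced above is systemd's usual code for an inactive unit. A hedged sketch of the same probe, assuming an existing profile reachable via `minikube ssh`:

-- example (Go sketch) --
	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	// kubeletActive reports whether the kubelet unit inside the node is active.
	func kubeletActive(profile string) (bool, error) {
		// `systemctl is-active --quiet` exits 0 when active, non-zero (3 above) otherwise.
		cmd := exec.Command("minikube", "ssh", "-p", profile,
			"sudo systemctl is-active --quiet service kubelet")
		err := cmd.Run()
		if err == nil {
			return true, nil
		}
		var exitErr *exec.ExitError
		if errors.As(err, &exitErr) {
			return false, nil // the unit is present but not active, or cannot be found
		}
		return false, err // ssh/exec failure rather than a unit state
	}

	func main() {
		// Placeholder profile; this run used NoKubernetes-413481.
		active, err := kubeletActive("no-k8s-demo")
		fmt.Println(active, err)
	}
-- /example --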

TestNoKubernetes/serial/ProfileList (3.76s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:171: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:181: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
no_kubernetes_test.go:181: (dbg) Done: out/minikube-linux-amd64 profile list --output=json: (2.764283717s)
--- PASS: TestNoKubernetes/serial/ProfileList (3.76s)

TestNoKubernetes/serial/Stop (2.03s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:160: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-413481
no_kubernetes_test.go:160: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-413481: (2.025646803s)
--- PASS: TestNoKubernetes/serial/Stop (2.03s)

TestNoKubernetes/serial/StartNoArgs (7.14s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:193: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-413481 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:193: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-413481 --driver=docker  --container-runtime=crio: (7.14372939s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (7.14s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.3s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-413481 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-413481 "sudo systemctl is-active --quiet service kubelet": exit status 1 (301.987951ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.30s)

TestPause/serial/SecondStartNoReconfiguration (6.82s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-349394 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-349394 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (6.79861234s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (6.82s)

TestStoppedBinaryUpgrade/Setup (0.54s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.54s)

TestStoppedBinaryUpgrade/Upgrade (67.84s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.32.0.1295315823 start -p stopped-upgrade-434419 --memory=3072 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.32.0.1295315823 start -p stopped-upgrade-434419 --memory=3072 --vm-driver=docker  --container-runtime=crio: (50.306171362s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.32.0.1295315823 -p stopped-upgrade-434419 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.32.0.1295315823 -p stopped-upgrade-434419 stop: (2.089556733s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-434419 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
E1101 09:17:41.355196    9414 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21835-5913/.minikube/profiles/functional-290156/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-434419 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (15.439646545s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (67.84s)

TestStoppedBinaryUpgrade/MinikubeLogs (0.97s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-434419
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.97s)

TestNetworkPlugins/group/false (3.57s)

=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-204434 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-204434 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio: exit status 14 (169.540839ms)

-- stdout --
	* [false-204434] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21835
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21835-5913/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21835-5913/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	

-- /stdout --
** stderr ** 
	I1101 09:18:31.780766  228776 out.go:360] Setting OutFile to fd 1 ...
	I1101 09:18:31.781255  228776 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:18:31.781271  228776 out.go:374] Setting ErrFile to fd 2...
	I1101 09:18:31.781278  228776 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:18:31.781714  228776 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21835-5913/.minikube/bin
	I1101 09:18:31.782407  228776 out.go:368] Setting JSON to false
	I1101 09:18:31.783624  228776 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":3660,"bootTime":1761985052,"procs":296,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1101 09:18:31.783683  228776 start.go:143] virtualization: kvm guest
	I1101 09:18:31.785549  228776 out.go:179] * [false-204434] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1101 09:18:31.787205  228776 out.go:179]   - MINIKUBE_LOCATION=21835
	I1101 09:18:31.787222  228776 notify.go:221] Checking for updates...
	I1101 09:18:31.789467  228776 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1101 09:18:31.790661  228776 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21835-5913/kubeconfig
	I1101 09:18:31.792077  228776 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21835-5913/.minikube
	I1101 09:18:31.793455  228776 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1101 09:18:31.794744  228776 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1101 09:18:31.796587  228776 config.go:182] Loaded profile config "cert-expiration-303094": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:18:31.796751  228776 config.go:182] Loaded profile config "kubernetes-upgrade-846924": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:18:31.796914  228776 config.go:182] Loaded profile config "running-upgrade-274843": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1101 09:18:31.797039  228776 driver.go:422] Setting default libvirt URI to qemu:///system
	I1101 09:18:31.820711  228776 docker.go:124] docker version: linux-28.5.1:Docker Engine - Community
	I1101 09:18:31.820854  228776 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 09:18:31.881566  228776 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:65 OomKillDisable:false NGoroutines:76 SystemTime:2025-11-01 09:18:31.871178442 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1101 09:18:31.881676  228776 docker.go:319] overlay module found
	I1101 09:18:31.883752  228776 out.go:179] * Using the docker driver based on user configuration
	I1101 09:18:31.885012  228776 start.go:309] selected driver: docker
	I1101 09:18:31.885030  228776 start.go:930] validating driver "docker" against <nil>
	I1101 09:18:31.885043  228776 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1101 09:18:31.887030  228776 out.go:203] 
	W1101 09:18:31.888349  228776 out.go:285] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I1101 09:18:31.889610  228776 out.go:203] 

** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-204434 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-204434

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-204434

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-204434

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-204434

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-204434

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-204434

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-204434

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-204434

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-204434

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-204434

>>> host: /etc/nsswitch.conf:
* Profile "false-204434" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-204434"

>>> host: /etc/hosts:
* Profile "false-204434" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-204434"

>>> host: /etc/resolv.conf:
* Profile "false-204434" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-204434"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-204434

>>> host: crictl pods:
* Profile "false-204434" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-204434"

>>> host: crictl containers:
* Profile "false-204434" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-204434"

>>> k8s: describe netcat deployment:
error: context "false-204434" does not exist

>>> k8s: describe netcat pod(s):
error: context "false-204434" does not exist

>>> k8s: netcat logs:
error: context "false-204434" does not exist

>>> k8s: describe coredns deployment:
error: context "false-204434" does not exist

>>> k8s: describe coredns pods:
error: context "false-204434" does not exist

>>> k8s: coredns logs:
error: context "false-204434" does not exist

>>> k8s: describe api server pod(s):
error: context "false-204434" does not exist

>>> k8s: api server logs:
error: context "false-204434" does not exist

>>> host: /etc/cni:
* Profile "false-204434" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-204434"

>>> host: ip a s:
* Profile "false-204434" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-204434"

>>> host: ip r s:
* Profile "false-204434" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-204434"

>>> host: iptables-save:
* Profile "false-204434" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-204434"

>>> host: iptables table nat:
* Profile "false-204434" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-204434"

>>> k8s: describe kube-proxy daemon set:
error: context "false-204434" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "false-204434" does not exist

>>> k8s: kube-proxy logs:
error: context "false-204434" does not exist

>>> host: kubelet daemon status:
* Profile "false-204434" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-204434"

>>> host: kubelet daemon config:
* Profile "false-204434" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-204434"

>>> k8s: kubelet logs:
* Profile "false-204434" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-204434"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-204434" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-204434"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-204434" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-204434"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21835-5913/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sat, 01 Nov 2025 09:16:42 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: cert-expiration-303094
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21835-5913/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sat, 01 Nov 2025 09:17:51 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.85.2:8443
  name: kubernetes-upgrade-846924
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21835-5913/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sat, 01 Nov 2025 09:18:28 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.94.2:8443
  name: running-upgrade-274843
contexts:
- context:
    cluster: cert-expiration-303094
    extensions:
    - extension:
        last-update: Sat, 01 Nov 2025 09:16:42 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: cert-expiration-303094
  name: cert-expiration-303094
- context:
    cluster: kubernetes-upgrade-846924
    user: kubernetes-upgrade-846924
  name: kubernetes-upgrade-846924
- context:
    cluster: running-upgrade-274843
    user: running-upgrade-274843
  name: running-upgrade-274843
current-context: running-upgrade-274843
kind: Config
users:
- name: cert-expiration-303094
  user:
    client-certificate: /home/jenkins/minikube-integration/21835-5913/.minikube/profiles/cert-expiration-303094/client.crt
    client-key: /home/jenkins/minikube-integration/21835-5913/.minikube/profiles/cert-expiration-303094/client.key
- name: kubernetes-upgrade-846924
  user:
    client-certificate: /home/jenkins/minikube-integration/21835-5913/.minikube/profiles/kubernetes-upgrade-846924/client.crt
    client-key: /home/jenkins/minikube-integration/21835-5913/.minikube/profiles/kubernetes-upgrade-846924/client.key
- name: running-upgrade-274843
  user:
    client-certificate: /home/jenkins/minikube-integration/21835-5913/.minikube/profiles/running-upgrade-274843/client.crt
    client-key: /home/jenkins/minikube-integration/21835-5913/.minikube/profiles/running-upgrade-274843/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: false-204434

>>> host: docker daemon status:
* Profile "false-204434" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-204434"

>>> host: docker daemon config:
* Profile "false-204434" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-204434"

>>> host: /etc/docker/daemon.json:
* Profile "false-204434" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-204434"

>>> host: docker system info:
* Profile "false-204434" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-204434"

>>> host: cri-docker daemon status:
* Profile "false-204434" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-204434"

>>> host: cri-docker daemon config:
* Profile "false-204434" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-204434"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-204434" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-204434"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-204434" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-204434"

>>> host: cri-dockerd version:
* Profile "false-204434" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-204434"

>>> host: containerd daemon status:
* Profile "false-204434" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-204434"

>>> host: containerd daemon config:
* Profile "false-204434" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-204434"

>>> host: /lib/systemd/system/containerd.service:
* Profile "false-204434" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-204434"

>>> host: /etc/containerd/config.toml:
* Profile "false-204434" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-204434"

>>> host: containerd config dump:
* Profile "false-204434" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-204434"

>>> host: crio daemon status:
* Profile "false-204434" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-204434"

>>> host: crio daemon config:
* Profile "false-204434" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-204434"

>>> host: /etc/crio:
* Profile "false-204434" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-204434"

>>> host: crio config:
* Profile "false-204434" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-204434"

----------------------- debugLogs end: false-204434 [took: 3.229538154s] --------------------------------
helpers_test.go:175: Cleaning up "false-204434" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-204434
--- PASS: TestNetworkPlugins/group/false (3.57s)
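Every debugLogs probe above fails with `context was not found for specified context: false-204434` because the cluster was never created; the kubeconfig dumped mid-log only lists the cert-expiration, kubernetes-upgrade and running-upgrade entries. A minimal sketch of verifying that from the kubeconfig with client-go's clientcmd loader (an assumed extra dependency, not something the suite itself does):

-- example (Go sketch) --
	package main

	import (
		"fmt"
		"log"

		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Path taken from the KUBECONFIG value shown earlier in this report.
		cfg, err := clientcmd.LoadFromFile("/home/jenkins/minikube-integration/21835-5913/kubeconfig")
		if err != nil {
			log.Fatal(err)
		}
		for name := range cfg.Contexts {
			fmt.Println("known context:", name)
		}
		if _, ok := cfg.Contexts["false-204434"]; !ok {
			// This is the condition kubectl reports as:
			//   context was not found for specified context: false-204434
			fmt.Println("false-204434 is not in the kubeconfig")
		}
	}
-- /example --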

TestStartStop/group/old-k8s-version/serial/FirstStart (50.34s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-152344 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-152344 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0: (50.33707351s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (50.34s)

TestStartStop/group/no-preload/serial/FirstStart (50.05s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-397460 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-397460 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (50.053031566s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (50.05s)

TestStartStop/group/old-k8s-version/serial/DeployApp (8.25s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-152344 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [9fab4cbc-fb02-4d8c-a42d-3898aed47002] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [9fab4cbc-fb02-4d8c-a42d-3898aed47002] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 8.003353466s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-152344 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (8.25s)
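The DeployApp step follows the same pattern in every group: create testdata/busybox.yaml against the profile's context, wait for the integration-test=busybox pod to become Ready, then run `ulimit -n` inside it. A compact sketch of that flow driven through kubectl, with the context name as a placeholder:

-- example (Go sketch) --
	package main

	import (
		"fmt"
		"log"
		"os/exec"
	)

	// kubectl runs a kubectl command against a fixed context and returns combined output.
	func kubectl(ctx string, args ...string) (string, error) {
		full := append([]string{"--context", ctx}, args...)
		out, err := exec.Command("kubectl", full...).CombinedOutput()
		return string(out), err
	}

	func main() {
		// Placeholder context; this group used old-k8s-version-152344.
		ctx := "demo-cluster"

		if _, err := kubectl(ctx, "create", "-f", "testdata/busybox.yaml"); err != nil {
			log.Fatal(err)
		}
		// Wait for the busybox pod to become Ready, mirroring the suite's 8m window.
		if _, err := kubectl(ctx, "wait", "--for=condition=Ready", "pod",
			"-l", "integration-test=busybox", "--timeout=8m"); err != nil {
			log.Fatal(err)
		}
		out, err := kubectl(ctx, "exec", "busybox", "--", "/bin/sh", "-c", "ulimit -n")
		if err != nil {
			log.Fatal(err)
		}
		fmt.Print(out)
	}
-- /example --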

TestStartStop/group/no-preload/serial/DeployApp (9.26s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-397460 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [943cd842-e356-47ea-82aa-89be0c4ca0ca] Pending
helpers_test.go:352: "busybox" [943cd842-e356-47ea-82aa-89be0c4ca0ca] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [943cd842-e356-47ea-82aa-89be0c4ca0ca] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 9.003775831s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-397460 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (9.26s)

TestStartStop/group/old-k8s-version/serial/Stop (17.01s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-152344 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-152344 --alsologtostderr -v=3: (17.012624957s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (17.01s)

TestStartStop/group/no-preload/serial/Stop (18.13s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-397460 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-397460 --alsologtostderr -v=3: (18.125988222s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (18.13s)

TestStartStop/group/embed-certs/serial/FirstStart (40.64s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-236314 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-236314 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (40.63868079s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (40.64s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.21s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-152344 -n old-k8s-version-152344
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-152344 -n old-k8s-version-152344: exit status 7 (84.004182ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-152344 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.21s)

TestStartStop/group/old-k8s-version/serial/SecondStart (51.71s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-152344 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-152344 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0: (51.377930904s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-152344 -n old-k8s-version-152344
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (51.71s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.29s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-397460 -n no-preload-397460
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-397460 -n no-preload-397460: exit status 7 (132.655695ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-397460 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.29s)

TestStartStop/group/no-preload/serial/SecondStart (44.22s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-397460 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-397460 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (43.883499603s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-397460 -n no-preload-397460
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (44.22s)

TestStartStop/group/embed-certs/serial/DeployApp (8.25s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-236314 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [6e751a41-58d1-4511-8037-a88d0dc71611] Pending
helpers_test.go:352: "busybox" [6e751a41-58d1-4511-8037-a88d0dc71611] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [6e751a41-58d1-4511-8037-a88d0dc71611] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 8.003711687s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-236314 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (8.25s)
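The DeployApp step applies testdata/busybox.yaml, waits up to 8 minutes for pods labelled integration-test=busybox to become Ready, and then reads the container's open-file limit with "ulimit -n". The Go sketch below reproduces that flow with plain kubectl (context, label, and commands are taken from the log; the real test polls pod state through its own helpers rather than kubectl wait).

    // Wait for the busybox pod to become Ready, then read its open-file limit.
    package main

    import (
        "fmt"
        "os/exec"
    )

    func run(args ...string) string {
        out, err := exec.Command("kubectl", args...).CombinedOutput()
        if err != nil {
            panic(fmt.Sprintf("kubectl %v: %v\n%s", args, err, out))
        }
        return string(out)
    }

    func main() {
        ctx := "--context=embed-certs-236314"
        // Block until the pod created from testdata/busybox.yaml reports Ready.
        run(ctx, "wait", "--for=condition=Ready", "pod",
            "-l", "integration-test=busybox", "-n", "default", "--timeout=8m")
        // Same check the test performs after deployment: the soft open-files limit.
        fmt.Print(run(ctx, "exec", "busybox", "--", "/bin/sh", "-c", "ulimit -n"))
    }
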
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-236314 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-236314 --alsologtostderr -v=3: (16.391647361s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (16.39s)
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-qjl6t" [827e6d08-5ed0-451b-84d3-91922812871c] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.00317363s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.00s)
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-89s5g" [0656e39c-eaf6-4c88-9863-16f00e262508] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003360443s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.00s)
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-qjl6t" [827e6d08-5ed0-451b-84d3-91922812871c] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004431586s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-152344 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.08s)
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-89s5g" [0656e39c-eaf6-4c88-9863-16f00e262508] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003568943s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-397460 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.10s)
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-236314 -n embed-certs-236314
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-236314 -n embed-certs-236314: exit status 7 (83.44372ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-236314 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.20s)
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-236314 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-236314 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (44.161687449s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-236314 -n embed-certs-236314
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (44.63s)
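The --embed-certs flag used for this profile inlines the client certificate and key into kubeconfig (client-certificate-data / client-key-data) instead of pointing at files under the profile directory. The sketch below is one quick way to confirm that; it assumes the kubeconfig user entry is named after the profile, which is minikube's usual convention.

    // Print whether the embed-certs profile carries inline certificate data in kubeconfig.
    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        out, err := exec.Command("kubectl", "config", "view", "--raw",
            "-o", `jsonpath={.users[?(@.name=="embed-certs-236314")].user.client-certificate-data}`).Output()
        if err != nil {
            panic(err)
        }
        fmt.Printf("embedded cert data present: %v (%d bytes of base64)\n", len(out) > 0, len(out))
    }
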
=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-152344 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.26s)
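VerifyKubernetesImages lists what the runtime has loaded ("image list --format=json") and reports anything outside the expected image set, which is how the kindnetd and busybox images above get flagged. The sketch below only approximates that idea by flagging images not under registry.k8s.io; it assumes the default list output is one image reference per line and is not the test's actual allow-list logic.

    // List loaded images for the profile and flag ones outside the core registry.
    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        out, err := exec.Command("out/minikube-linux-amd64", "-p", "old-k8s-version-152344", "image", "list").Output()
        if err != nil {
            panic(err)
        }
        for _, img := range strings.Split(strings.TrimSpace(string(out)), "\n") {
            if img != "" && !strings.HasPrefix(img, "registry.k8s.io/") {
                fmt.Println("non-core image:", img) // e.g. kindest/kindnetd, gcr.io/k8s-minikube/busybox
            }
        }
    }
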
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-397460 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.28s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-648641 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-648641 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (44.30241285s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (44.30s)
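This profile is started with --apiserver-port=8444, so its kubeconfig entry should point at that port instead of the default 8443. A small check, assuming the cluster entry in kubeconfig is named after the profile (minikube's usual convention):

    // Read the API server URL for the profile's cluster entry and check the port.
    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        out, err := exec.Command("kubectl", "config", "view",
            "-o", `jsonpath={.clusters[?(@.name=="default-k8s-diff-port-648641")].cluster.server}`).Output()
        if err != nil {
            panic(err)
        }
        server := strings.TrimSpace(string(out))
        fmt.Println(server, "-> port 8444 in use:", strings.HasSuffix(server, ":8444"))
    }
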
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-340756 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
E1101 09:21:43.471066    9414 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21835-5913/.minikube/profiles/addons-491859/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-340756 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (28.133308946s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (28.13s)
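Two things are specific to this start: --wait=apiserver,system_pods,default_sa narrows what minikube blocks on (presumably because no CNI is configured yet, so it cannot wait for all pods, as the later "cni mode requires additional setup" warnings note), and --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 sets the cluster's pod network range. If the extra-config took effect, the node's allocated podCIDR should fall inside that range; a quick probe, using the same context-per-profile convention as the kubectl calls elsewhere in this log:

    // Read the node's allocated pod CIDR for the newest-cni profile.
    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        out, err := exec.Command("kubectl", "--context", "newest-cni-340756",
            "get", "nodes", "-o", "jsonpath={.items[0].spec.podCIDR}").Output()
        if err != nil {
            panic(err)
        }
        fmt.Printf("node podCIDR: %s (expected within 10.42.0.0/16)\n", out)
    }
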
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-vtj9p" [e4c34dbc-f680-46c7-92ea-18532ff6d5f0] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003971799s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-340756 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-340756 --alsologtostderr -v=3: (12.518598979s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (12.52s)
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-vtj9p" [e4c34dbc-f680-46c7-92ea-18532ff6d5f0] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003452975s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-236314 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.07s)
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-236314 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.24s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-648641 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [99fe2b46-4570-4a28-91ed-cea90f970719] Pending
helpers_test.go:352: "busybox" [99fe2b46-4570-4a28-91ed-cea90f970719] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [99fe2b46-4570-4a28-91ed-cea90f970719] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 8.004342887s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-648641 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.26s)
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-340756 -n newest-cni-340756
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-340756 -n newest-cni-340756: exit status 7 (115.153705ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-340756 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.30s)
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-340756 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-340756 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (12.127398395s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-340756 -n newest-cni-340756
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (12.52s)
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-204434 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-204434 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio: (47.658695177s)
--- PASS: TestNetworkPlugins/group/auto/Start (47.66s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-648641 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-648641 --alsologtostderr -v=3: (16.391522236s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (16.39s)
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-340756 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.30s)
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-204434 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-204434 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio: (41.343452364s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (41.34s)
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-204434 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-204434 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio: (50.736803044s)
--- PASS: TestNetworkPlugins/group/calico/Start (50.74s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-648641 -n default-k8s-diff-port-648641
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-648641 -n default-k8s-diff-port-648641: exit status 7 (100.260231ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-648641 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.25s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-648641 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
E1101 09:22:41.354755    9414 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21835-5913/.minikube/profiles/functional-290156/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-648641 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (49.261735131s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-648641 -n default-k8s-diff-port-648641
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (49.63s)
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-204434 "pgrep -a kubelet"
I1101 09:22:53.655805    9414 config.go:182] Loaded profile config "auto-204434": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.36s)
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-204434 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-8fdvq" [8ca4c4bf-9b59-4833-94ff-87d1bea41648] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-8fdvq" [8ca4c4bf-9b59-4833-94ff-87d1bea41648] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 8.038293428s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (8.28s)
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:352: "kindnet-zrvwl" [70f61249-fa1f-4989-aca0-3bec76ac6da0] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.004271693s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-204434 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.15s)
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-204434 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.11s)
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-204434 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.12s)
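Each network plugin gets the same three probes, run inside the netcat deployment: DNS resolution of kubernetes.default, a TCP connect to localhost:8080, and a hairpin connect to the deployment's own Service name ("netcat") on 8080. The commands are the ones shown above; the sketch just strings them together for one context.

    // Run the DNS, localhost, and hairpin probes against the auto-CNI profile.
    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        probes := map[string]string{
            "dns":       "nslookup kubernetes.default",
            "localhost": "nc -w 5 -i 5 -z localhost 8080",
            "hairpin":   "nc -w 5 -i 5 -z netcat 8080", // the deployment's own Service name
        }
        for name, probe := range probes {
            err := exec.Command("kubectl", "--context", "auto-204434",
                "exec", "deployment/netcat", "--", "/bin/sh", "-c", probe).Run()
            fmt.Printf("%s probe ok: %v\n", name, err == nil)
        }
    }
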
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-204434 "pgrep -a kubelet"
I1101 09:23:03.053586    9414 config.go:182] Loaded profile config "kindnet-204434": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.30s)
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-204434 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-2sk56" [adf8fd56-98ad-4acf-8def-be61247e4fbf] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-2sk56" [adf8fd56-98ad-4acf-8def-be61247e4fbf] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 8.004494194s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (8.21s)
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-204434 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.13s)
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-204434 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.12s)
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-204434 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.10s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-9lh8h" [5fd3f60c-4a65-41fe-85e1-70897fbd88fb] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003549305s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.00s)
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:352: "calico-node-z666g" [cfbbd0d1-1653-4751-b3c0-c6755cc9a985] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
helpers_test.go:352: "calico-node-z666g" [cfbbd0d1-1653-4751-b3c0-c6755cc9a985] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.003227438s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-9lh8h" [5fd3f60c-4a65-41fe-85e1-70897fbd88fb] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004352088s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-648641 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.09s)
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-204434 "pgrep -a kubelet"
I1101 09:23:19.751792    9414 config.go:182] Loaded profile config "calico-204434": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.37s)
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-204434 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-6xrll" [da9c6998-b84b-4cf2-9562-5add4207042b] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-6xrll" [da9c6998-b84b-4cf2-9562-5add4207042b] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 10.004307607s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (10.24s)
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-204434 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-204434 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio: (49.446087994s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (49.45s)
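Unlike the built-in plugin names used elsewhere in this group, --cni here points at a manifest (testdata/kube-flannel.yaml) that minikube applies once the node is up. One way to see what actually landed on the node is to list the CNI configs under the conventional /etc/cni/net.d directory, reusing the ssh form shown in the KubeletFlags steps; this is a quick inspection sketch, not part of the test.

    // List the CNI configuration files inside the custom-flannel node.
    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        out, err := exec.Command("out/minikube-linux-amd64", "ssh",
            "-p", "custom-flannel-204434", "ls /etc/cni/net.d").CombinedOutput()
        if err != nil {
            panic(err)
        }
        fmt.Print(string(out))
    }
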
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-648641 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.29s)
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-204434 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.13s)
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-204434 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.11s)
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-204434 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.13s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-204434 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-204434 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio: (1m5.300091809s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (65.30s)
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-204434 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-204434 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio: (58.148122622s)
--- PASS: TestNetworkPlugins/group/flannel/Start (58.15s)
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-204434 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-204434 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio: (1m3.442997507s)
--- PASS: TestNetworkPlugins/group/bridge/Start (63.44s)
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-204434 "pgrep -a kubelet"
I1101 09:24:12.947524    9414 config.go:182] Loaded profile config "custom-flannel-204434": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.39s)
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-204434 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-sm46q" [90262ccf-47b3-4e56-8853-94b61eec0399] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-sm46q" [90262ccf-47b3-4e56-8853-94b61eec0399] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 9.004426516s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (9.26s)
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-204434 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.12s)
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-204434 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.10s)
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-204434 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.10s)
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:352: "kube-flannel-ds-rvvhn" [ea8334ff-98d9-48c5-bd39-fbb58a5af054] Running
E1101 09:24:34.771892    9414 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21835-5913/.minikube/profiles/old-k8s-version-152344/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 09:24:37.996516    9414 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21835-5913/.minikube/profiles/no-preload-397460/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 09:24:38.003015    9414 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21835-5913/.minikube/profiles/no-preload-397460/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 09:24:38.014428    9414 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21835-5913/.minikube/profiles/no-preload-397460/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 09:24:38.035834    9414 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21835-5913/.minikube/profiles/no-preload-397460/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 09:24:38.078167    9414 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21835-5913/.minikube/profiles/no-preload-397460/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 09:24:38.159441    9414 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21835-5913/.minikube/profiles/no-preload-397460/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 09:24:38.320988    9414 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21835-5913/.minikube/profiles/no-preload-397460/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.004041267s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)
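The ControllerPod checks only differ in where each CNI's agent pods live: kindnet uses app=kindnet in kube-system, Calico uses k8s-app=calico-node in kube-system, and Flannel uses app=flannel in the kube-flannel namespace (all taken from the entries above). Below is a sketch of the same readiness gate expressed with kubectl wait instead of the suite's poller.

    // Wait for each CNI agent pod to report Ready, using the selectors from the log.
    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        checks := []struct{ context, namespace, selector string }{
            {"kindnet-204434", "kube-system", "app=kindnet"},
            {"calico-204434", "kube-system", "k8s-app=calico-node"},
            {"flannel-204434", "kube-flannel", "app=flannel"},
        }
        for _, c := range checks {
            err := exec.Command("kubectl", "--context", c.context, "-n", c.namespace,
                "wait", "--for=condition=Ready", "pod", "-l", c.selector, "--timeout=10m").Run()
            fmt.Printf("%s: CNI agent ready: %v\n", c.context, err == nil)
        }
    }
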
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-204434 "pgrep -a kubelet"
E1101 09:24:38.642273    9414 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21835-5913/.minikube/profiles/no-preload-397460/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
I1101 09:24:38.892003    9414 config.go:182] Loaded profile config "enable-default-cni-204434": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.33s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-204434 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-8gzsm" [1f3d7f7f-f0f6-4903-9ca9-5a64cb4af223] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-8gzsm" [1f3d7f7f-f0f6-4903-9ca9-5a64cb4af223] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 8.003982875s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (8.20s)
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-204434 "pgrep -a kubelet"
E1101 09:24:39.284391    9414 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21835-5913/.minikube/profiles/no-preload-397460/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
I1101 09:24:39.463544    9414 config.go:182] Loaded profile config "flannel-204434": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.33s)
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-204434 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-sgql5" [0f8662f0-877b-428f-b5ae-8f2d10fc4566] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1101 09:24:39.893261    9414 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21835-5913/.minikube/profiles/old-k8s-version-152344/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "netcat-cd4db9dbf-sgql5" [0f8662f0-877b-428f-b5ae-8f2d10fc4566] Running
E1101 09:24:43.128323    9414 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21835-5913/.minikube/profiles/no-preload-397460/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 9.003731053s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (9.23s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-204434 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.11s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-204434 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.09s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-204434 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.09s)
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-204434 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.11s)
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-204434 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.09s)
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-204434 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.09s)
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-204434 "pgrep -a kubelet"
I1101 09:24:57.364395    9414 config.go:182] Loaded profile config "bridge-204434": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.34s)
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-204434 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-cnfgl" [72c0579d-410d-47c7-8e5a-08a4c1b87109] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1101 09:24:58.492289    9414 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21835-5913/.minikube/profiles/no-preload-397460/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "netcat-cd4db9dbf-cnfgl" [72c0579d-410d-47c7-8e5a-08a4c1b87109] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 8.004232102s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (8.22s)
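
Editor's note: the NetCatPod step replaces the netcat deployment from testdata and then waits up to 15m for pods labelled app=netcat in the default namespace to become Ready. A minimal sketch for reproducing that wait by hand, using only names that appear in the log above:
# Re-apply the test workload (same command the test runs).
kubectl --context bridge-204434 replace --force -f testdata/netcat-deployment.yaml
# Check the pods the test polls for, then block until the rollout completes (15m mirrors the test timeout).
kubectl --context bridge-204434 -n default get pods -l app=netcat
kubectl --context bridge-204434 -n default rollout status deployment/netcat --timeout=15m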

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/DNS (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-204434 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.15s)
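
Editor's note: the DNS subtest resolves kubernetes.default from inside the netcat pod. A hedged variant for debugging a failure is to query the cluster DNS address directly; 10.96.0.10 is the address the debugLogs probes in this report use, not something this particular test does:
# Same lookup the test performs.
kubectl --context bridge-204434 exec deployment/netcat -- nslookup kubernetes.default
# Hypothetical follow-up: query the cluster DNS service address explicitly.
kubectl --context bridge-204434 exec deployment/netcat -- nslookup kubernetes.default.svc.cluster.local 10.96.0.10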

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Localhost (0.09s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-204434 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.09s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/HairPin (0.1s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-204434 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.10s)
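
Editor's note: the Localhost and HairPin subtests above differ only in their target. Localhost dials port 8080 on the pod's own loopback; HairPin dials the pod's own service name ("netcat"), so the connection leaves the pod, hits the service VIP, and is NATed back to the same pod, which only succeeds when hairpin traffic is handled correctly by the CNI/kube-proxy path. The commands are exactly those shown in the log:
# Loopback reachability inside the netcat pod (Localhost subtest).
kubectl --context bridge-204434 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
# Hairpin reachability: the pod reaches itself back through its own service (HairPin subtest).
kubectl --context bridge-204434 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"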

                                                
                                    

Test skip (27/327)

x
+
TestDownloadOnly/v1.28.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.1/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.34.1/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.1/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.34.1/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.1/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.34.1/kubectl (0.00s)

                                                
                                    
x
+
TestAddons/serial/GCPAuth/RealCredentials (0s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:763: skipping GCPAuth addon test until 'Permission "artifactregistry.repositories.downloadArtifacts" denied on resource "projects/k8s-minikube/locations/us/repositories/test-artifacts" (or it may not exist)' issue is resolved
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

                                                
                                    
x
+
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:483: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
x
+
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
x
+
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio true linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
x
+
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:37: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:101: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes
functional_test.go:82: 
--- SKIP: TestFunctionalNewestKubernetes (0.00s)

                                                
                                    
x
+
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
x
+
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
x
+
TestISOImage (0s)

                                                
                                                
=== RUN   TestISOImage
iso_test.go:35: This test requires a VM driver
--- SKIP: TestISOImage (0.00s)

                                                
                                    
x
+
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
x
+
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
x
+
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
x
+
TestStartStop/group/disable-driver-mounts (0.2s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-366530" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-366530
--- SKIP: TestStartStop/group/disable-driver-mounts (0.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/kubenet (3.54s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:636: 
----------------------- debugLogs start: kubenet-204434 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-204434

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-204434

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-204434

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-204434

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-204434

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-204434

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-204434

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-204434

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-204434

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-204434

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-204434" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-204434"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-204434" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-204434"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-204434" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-204434"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-204434

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-204434" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-204434"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-204434" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-204434"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-204434" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-204434" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-204434" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-204434" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-204434" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-204434" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-204434" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-204434" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-204434" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-204434"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-204434" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-204434"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-204434" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-204434"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-204434" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-204434"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-204434" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-204434"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-204434" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-204434" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-204434" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-204434" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-204434"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-204434" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-204434"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-204434" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-204434"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-204434" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-204434"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-204434" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-204434"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21835-5913/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sat, 01 Nov 2025 09:16:42 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: cert-expiration-303094
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21835-5913/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sat, 01 Nov 2025 09:17:51 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.85.2:8443
  name: kubernetes-upgrade-846924
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21835-5913/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sat, 01 Nov 2025 09:18:28 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.94.2:8443
  name: running-upgrade-274843
contexts:
- context:
    cluster: cert-expiration-303094
    extensions:
    - extension:
        last-update: Sat, 01 Nov 2025 09:16:42 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: cert-expiration-303094
  name: cert-expiration-303094
- context:
    cluster: kubernetes-upgrade-846924
    user: kubernetes-upgrade-846924
  name: kubernetes-upgrade-846924
- context:
    cluster: running-upgrade-274843
    user: running-upgrade-274843
  name: running-upgrade-274843
current-context: running-upgrade-274843
kind: Config
users:
- name: cert-expiration-303094
  user:
    client-certificate: /home/jenkins/minikube-integration/21835-5913/.minikube/profiles/cert-expiration-303094/client.crt
    client-key: /home/jenkins/minikube-integration/21835-5913/.minikube/profiles/cert-expiration-303094/client.key
- name: kubernetes-upgrade-846924
  user:
    client-certificate: /home/jenkins/minikube-integration/21835-5913/.minikube/profiles/kubernetes-upgrade-846924/client.crt
    client-key: /home/jenkins/minikube-integration/21835-5913/.minikube/profiles/kubernetes-upgrade-846924/client.key
- name: running-upgrade-274843
  user:
    client-certificate: /home/jenkins/minikube-integration/21835-5913/.minikube/profiles/running-upgrade-274843/client.crt
    client-key: /home/jenkins/minikube-integration/21835-5913/.minikube/profiles/running-upgrade-274843/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-204434

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-204434" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-204434"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-204434" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-204434"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-204434" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-204434"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-204434" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-204434"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-204434" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-204434"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-204434" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-204434"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-204434" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-204434"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-204434" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-204434"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-204434" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-204434"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-204434" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-204434"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-204434" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-204434"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-204434" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-204434"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-204434" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-204434"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-204434" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-204434"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-204434" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-204434"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-204434" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-204434"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-204434" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-204434"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-204434" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-204434"

                                                
                                                
----------------------- debugLogs end: kubenet-204434 [took: 3.366230165s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-204434" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-204434
--- SKIP: TestNetworkPlugins/group/kubenet (3.54s)
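
Editor's note: the repeated "context was not found" and "Profile ... not found" lines in the debugLogs above are expected. The kubenet group is skipped before a kubenet-204434 cluster is ever created, so the debug collector finds only the three unrelated profiles visible in the kubectl config dump. A minimal sketch for confirming that against the same kubeconfig:
# List the contexts that actually exist; kubenet-204434 is absent.
kubectl config get-contexts
# Inspect one of the profiles that does exist (name taken from the dump above).
kubectl --context cert-expiration-303094 get nodes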

                                                
                                    
x
+
TestNetworkPlugins/group/cilium (3.86s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:636: 
----------------------- debugLogs start: cilium-204434 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-204434

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-204434

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-204434

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-204434

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-204434

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-204434

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-204434

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-204434

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-204434

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-204434

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-204434" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-204434"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-204434" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-204434"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-204434" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-204434"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-204434

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-204434" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-204434"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-204434" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-204434"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-204434" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-204434" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-204434" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-204434" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-204434" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-204434" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-204434" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-204434" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-204434" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-204434"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-204434" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-204434"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-204434" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-204434"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-204434" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-204434"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-204434" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-204434"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-204434

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-204434

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-204434" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-204434" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-204434

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-204434

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-204434" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-204434" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-204434" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-204434" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-204434" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-204434" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-204434"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-204434" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-204434"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-204434" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-204434"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-204434" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-204434"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-204434" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-204434"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21835-5913/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sat, 01 Nov 2025 09:16:42 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: cert-expiration-303094
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21835-5913/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sat, 01 Nov 2025 09:17:51 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.85.2:8443
  name: kubernetes-upgrade-846924
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21835-5913/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sat, 01 Nov 2025 09:18:28 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.94.2:8443
  name: running-upgrade-274843
contexts:
- context:
    cluster: cert-expiration-303094
    extensions:
    - extension:
        last-update: Sat, 01 Nov 2025 09:16:42 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: cert-expiration-303094
  name: cert-expiration-303094
- context:
    cluster: kubernetes-upgrade-846924
    user: kubernetes-upgrade-846924
  name: kubernetes-upgrade-846924
- context:
    cluster: running-upgrade-274843
    user: running-upgrade-274843
  name: running-upgrade-274843
current-context: running-upgrade-274843
kind: Config
users:
- name: cert-expiration-303094
  user:
    client-certificate: /home/jenkins/minikube-integration/21835-5913/.minikube/profiles/cert-expiration-303094/client.crt
    client-key: /home/jenkins/minikube-integration/21835-5913/.minikube/profiles/cert-expiration-303094/client.key
- name: kubernetes-upgrade-846924
  user:
    client-certificate: /home/jenkins/minikube-integration/21835-5913/.minikube/profiles/kubernetes-upgrade-846924/client.crt
    client-key: /home/jenkins/minikube-integration/21835-5913/.minikube/profiles/kubernetes-upgrade-846924/client.key
- name: running-upgrade-274843
  user:
    client-certificate: /home/jenkins/minikube-integration/21835-5913/.minikube/profiles/running-upgrade-274843/client.crt
    client-key: /home/jenkins/minikube-integration/21835-5913/.minikube/profiles/running-upgrade-274843/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-204434

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-204434" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-204434"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-204434" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-204434"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-204434" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-204434"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-204434" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-204434"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-204434" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-204434"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-204434" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-204434"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-204434" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-204434"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-204434" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-204434"

>>> host: cri-dockerd version:
* Profile "cilium-204434" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-204434"

>>> host: containerd daemon status:
* Profile "cilium-204434" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-204434"

>>> host: containerd daemon config:
* Profile "cilium-204434" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-204434"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-204434" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-204434"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-204434" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-204434"

>>> host: containerd config dump:
* Profile "cilium-204434" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-204434"

>>> host: crio daemon status:
* Profile "cilium-204434" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-204434"

>>> host: crio daemon config:
* Profile "cilium-204434" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-204434"

>>> host: /etc/crio:
* Profile "cilium-204434" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-204434"

>>> host: crio config:
* Profile "cilium-204434" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-204434"

----------------------- debugLogs end: cilium-204434 [took: 3.676689471s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-204434" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-204434
--- SKIP: TestNetworkPlugins/group/cilium (3.86s)
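Note: the repeated "Profile "cilium-204434" not found" lines above come from the debug-log collector querying a profile that does not exist on this host, so every host/daemon query falls back to the same hint before the harness deletes the leftover profile entry. A minimal, hypothetical shell sketch (not part of the test output) for checking and cleaning up such a leftover profile by hand with the same binary:

	# list the profiles known to the local minikube state directory
	out/minikube-linux-amd64 profile list
	# delete the stale test profile if it still appears in the list
	out/minikube-linux-amd64 delete -p cilium-204434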
